You can choose a wiki page from the left sidebar or go to the blog for the blog archives.

Here are the latest 10 posts with pagination:

How to work around Firefox's lack of respect for the CSP specification for CSP reports to Sentry

As stated in https://bugzilla.mozilla.org/show_bug.cgi?id=1192684#c8, Firefox doesn't respect the specification and doesn't include the effective-directive or status-code fields in its CSP reports.

Sentry expects them, and rejects the reports because of that.

To work around the issue, I used the Nginx Lua module to manipulate the JSON body before it is sent to the uwsgi backend of Sentry.

Note: use it at your own risk.

The CSP header should contain:

Content-Security-Policy: whatever-you; want;... report-uri https://sentry.sigpipe.me/api/the_project_id/csp-report/?sentry_key=your_key&sentry_version=5

Make sure the module is enabled; on Debian it's something like:

# cat /etc/nginx/modules-enabled/00-mod-http-ndk.conf
load_module modules/ndk_http_module.so;
# cat /etc/nginx/modules-enabled/50-mod-http-lua.conf
load_module modules/ngx_http_lua_module.so;
 
# Make sure there is:
include /etc/nginx/modules-enabled/*.conf;
# before the http {} directive in /etc/nginx/nginx.conf

I also needed to add this in /etc/nginx/nginx.conf because the default paths seem wrong:

http {
...
	lua_package_path "/usr/share/lua/5.1/?.lua;;";
	lua_package_cpath '/usr/lib/x86_64-linux-gnu/lua/5.1/?.so;;';
...

Packages needed are:

nginx-extras libnginx-mod-http-lua libnginx-mod-http-ndk lua-cjson
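On Debian, installing them is simply:

apt install nginx-extras libnginx-mod-http-lua libnginx-mod-http-ndk lua-cjson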

Add the Lua call in the virtual host of your Sentry:

...
	location ~ ^/api/(?<projet>[0-9]+)/csp-report/ {
		access_by_lua_file /etc/nginx/proxy_csp.lua;
		include uwsgi_params;
		uwsgi_pass 127.0.0.1:9000;
	}

	location / {
		include uwsgi_params;
		uwsgi_pass 127.0.0.1:9000;
	}
...

And the most useful file:

/etc/nginx/proxy_csp.lua
if ngx.req.get_method() == "POST" then
    local cjson = require "cjson"

    -- read the body; uncomment the log line to dump it for debugging
    -- (note: get_body_data() returns nil if the body was buffered to a temp file)
    ngx.req.read_body()
    local body = ngx.req.get_body_data()
    --ngx.log(ngx.STDERR, body)
    -- decode the JSON body
    local json = cjson.decode(body)

    -- We need to manipulate the JSON body to add, if missing:
    -- effective-directive: the name of the violated directive
    -- status-code: the HTTP status code of the protected resource

    if (not json['csp-report']['effective-directive']) then
        -- ugly split to get the directive name (first word of violated-directive)
        local words = {}
        local vd = json['csp-report']['violated-directive']
        for word in vd:gmatch("[a-zA-Z0-9-]+") do table.insert(words, word) end

        if (words[1]) then
            json['csp-report']['effective-directive'] = words[1]
        else
            json['csp-report']['effective-directive'] = 'Unknown violation wrong format string'
        end
    end

    if (not json['csp-report']['status-code']) then
        json['csp-report']['status-code'] = 200
    end

    -- re-encode and set the new body
    local new_json = cjson.encode(json)
    ngx.req.set_body_data(new_json)

    -- we are done
    return
end
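To check the Nginx config, reload it, and then verify the rewrite end to end, something like this should work (the project id and key in the URL are placeholders, and the report body is a minimal hand-written example):

nginx -t && systemctl reload nginx
curl -i -H "Content-Type: application/csp-report" \
     -d '{"csp-report": {"document-uri": "https://example.com/", "violated-directive": "script-src"}}' \
     "https://sentry.sigpipe.me/api/the_project_id/csp-report/?sentry_key=your_key&sentry_version=5"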
2017/04/19 00:00 · dashie

Extend a bridge using OpenVSwitch and VXLAN over an internal network

Classic bridges are local only. With OpenVSwitch and automatic VXLAN tunnelling, if you have a private network between your two servers, you can have a bridge on each one, linked together.
They will share the same subnet, and servers on one side can reach the other without issue.
It's possible to switch from brctl to ovs without issues since no configuration is required on the interfaces side of the containers: just set up the bridge and use it.


The private interface carrying the tunnel (eth1, or a vmbr2 on top of it) is in fact transparent: you don't add it to the bridge, it's just “transparently” used by VXLAN (because the tunnel goes over the private network).

OpenVSwitch bridges are not compatible with brctl; you should use ovs-vsctl instead, like:

ovs-vsctl show

Requirements

Here we are assuming:

  • Server 1 PRIVATE LAN IP: 192.168.1.4
  • Server 2 PRIVATE LAN IP: 192.168.1.5
  • Bridge name on each server: vmbr0
  • Extended bridge network: 10.0.0.0/24
  • Server 1 BRIDGE IP: 10.0.0.1
  • Server 2 BRIDGE IP: 10.0.0.2

Installation

apt install openvswitch-switch openvswitch-common

Create an OpenVSwitch bridge on each server:

ovs-vsctl add-br vmbr0

Config on server1:

/etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
	address 10.0.0.1
	netmask 255.255.255.0
	ovs_type OVSBridge
	post-up ovs-vsctl add-port vmbr0 vxlan1 -- set Interface vxlan1 type=vxlan options:remote_ip=192.168.1.5

For server2:

/etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
	address 10.0.0.2
	netmask 255.255.255.0
	ovs_type OVSBridge
	post-up ovs-vsctl add-port vmbr0 vxlan1 -- set Interface vxlan1 type=vxlan options:remote_ip=192.168.1.4

Bring up the network on each:

ifup vmbr0


You may need to reboot to load OpenVSwitch kernel modules.
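Instead of a full reboot, loading the kernel module and restarting the service may be enough (assuming the Debian packaging used above):

modprobe openvswitch
systemctl restart openvswitch-switch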
You should then be able to

ping 10.0.0.2

from server 1, and ping 10.0.0.1 from server 2.
You can get the OpenVSwitch status and configuration with:

server1:~# ovs-vsctl show
03edd856-b35a-4c2d-b283-1dfc28ab7abb
    Bridge "vmbr0"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "vxlan1"
            Interface "vxlan1"
                type: vxlan
                options: {remote_ip="192.168.1.5"}
        Port "veth2ES9B5"
            Interface "veth2ES9B5"
    ovs_version: "2.3.0"

LXC Notes

LXC uses brctl by default, and brctl isn't compatible with OpenVSwitch; here is the configuration needed to use the new OVS bridge:

/etc/lxc/ifup
#!/bin/bash
# $5 is the host-side veth interface name passed by LXC
BRIDGE='vmbr0'
ovs-vsctl --may-exist add-br $BRIDGE
ovs-vsctl --if-exists del-port $BRIDGE $5
ovs-vsctl --may-exist add-port $BRIDGE $5
/etc/lxc/ifdown
#!/bin/bash
# $5 is the host-side veth interface name passed by LXC
ovsBr='vmbr0'
ovs-vsctl --if-exists del-port ${ovsBr} $5
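Both scripts must be executable:

chmod +x /etc/lxc/ifup /etc/lxc/ifdown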

In the CT config:

/var/lib/lxc/derpy/config
lxc.network.type = veth
lxc.network.flags = up
lxc.network.name = eth0
lxc.network.script.up = /etc/lxc/ifup
lxc.network.script.down = /etc/lxc/ifdown
lxc.network.ipv4 = 10.0.0.111/24
lxc.network.ipv4.gateway = 10.0.0.100
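After (re)starting the container, its veth interface should show up as a port on the OVS bridge (as in the ovs-vsctl show output above); a quick check, using the container name from the config path:

lxc-start -n derpy -d
ovs-vsctl list-ports vmbr0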
2017/04/09 00:00 · dashie

File recovery from formatted hard drive

First:

  • testdisk did nothing: it found no recoverable partition and therefore couldn't restore files
  • foremost doesn't handle plain ASCII text (such files probably have no real magic markers or whatever)

Now, the happy story:

  • I accidentally formatted the wrong hard drive; ok, that's not that bad.
  • And reinstalled Debian on it. Upgraded packages, installed things, did some service configuration, mysql, nginx. Oops…
  • I hadn't backed up some Asterisk configuration because this hard drive wasn't supposed to be formatted.
  • I needed to recover these files.

Requirements

  • The poor hard drive (/dev/sda)
  • Another hard drive (in my case two USB drives, /mnt/usb1 and /mnt/usb2, each with enough space to store the full /dev/sda)
  • I preferred to have two drives, one to store the image and another to store whatever is extracted from it; it may also help with USB speed
  • dd, strings, cp, etc.
  • A pizza. (I ate one, this may help you too)

Backup

dd if=/dev/sda of=/mnt/usb1/hdd.bin bs=1M
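With a recent GNU dd (coreutils ≥ 8.24) you can add status=progress to get some feedback during the long copy:

dd if=/dev/sda of=/mnt/usb1/hdd.bin bs=1M status=progress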

Make indexes

When it's finished, you remember some patterns from your config file, right?

In my case it was something like […ovh…]; I didn't remember the case, whether there was a dash, or whatever.

Extract all strings matching this:

strings -t d /mnt/usb1/hdd.bin | grep -i "ovh" | grep "\[" | grep "\]" | tee /mnt/usb2/hdd.ovh.strings

-t d prints a decimal offset; we will use that as an “index” into the hdd.bin file.

After quite a while, you may get something like:

19527608307 exten => _0[67]X.,1,NoOp(SIP/To-Ovh/P_${EXTEN}, ${TIMEOUT}, ${DIAL_OPTS})
19527609393 [Dp-From-Ovh]

Not really automatic extraction

Now comes the best part; start with:

dd if=/mnt/usb1/hdd.bin bs=1 count=100 skip=19527608307

bs stays at 1; count is the number of blocks (here, bytes) to show after skip (our “index”).

What I did was round the index: 19527608307, then 19527608300, then 19527608000, etc.

Go backwards, because you want to reach the start of the file. Round as close as possible to the top without getting too much garbage.

You will finally get the start of the file; then increase the count: 100, 1000, 10000, 10500, 11000, etc.

You may finally end up with SOME VERSION of your config file; there are multiple versions, or “revisions”, from the times you edited it in the past.

You will need to do that for every repeating index found in the file, like:

19527627416 [sip-ovh](!)            ; OVH Template
19527627812 [To-Ovh](sip-ovh)
19527627878 [From-Ovh](sip-ovh)
21673001628 [sip-ovh](!)            ; OVH Template
21673002024 [To-Ovh](sip-ovh)
21673002043 [From-Ovh](sip-ovh)
21673005724 [sip-ovh](!)            ; OVH Template
21673006120 [To-Ovh](sip-ovh)
21673006139 [From-Ovh](sip-ovh)

Then do the dd bs/count/skip dance for every index (the ones matching “; OVH Template”, for example), then diff them and find the latest.
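For example, to extract two candidates and compare them (offsets taken from the listing above, count is a rough guess to adjust, output file names are just examples):

dd if=/mnt/usb1/hdd.bin bs=1 skip=19527627416 count=11000 of=/mnt/usb2/sip_v1.conf
dd if=/mnt/usb1/hdd.bin bs=1 skip=21673001628 count=11000 of=/mnt/usb2/sip_v2.conf
diff -u /mnt/usb2/sip_v1.conf /mnt/usb2/sip_v2.conf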

You will not have any timestamps or anything unless your config has them in plain text.

Have fun.

2016/11/06 00:00 · dashie

Jenkins: Qt5 and CrossCompilation for Windows

Installing MXE

cd /opt
git clone https://github.com/mxe/mxe.git
cd mxe
make qt5 MXE_TARGETS=i686-w64-mingw32.static
# or .shared, but I have only used static so far
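The MXE build takes a while; once it's done, the cross tools should be under /opt/mxe/usr/bin. A quick sanity check (tool name derived from the target above):

/opt/mxe/usr/bin/i686-w64-mingw32.static-qmake-qt5 -v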

Configuring the Build step in Jenkins

Build steps
# Env variables
export target=i686-w64-mingw32.static
export mxedir=/opt/mxe/
export releasedir=$PWD/$JOB_NAME.$BUILD_ID/
export PATH=$mxedir/usr/bin:$PATH
# Build
sed -i "s/^#DEFINES/DEFINES/" cutecw.pro
$mxedir/usr/bin/$target-qmake-qt5 cutecw.pro
make
# Creating release dir and copying assets
mkdir -p $releasedir
cp release/cutecw.exe $releasedir/
cp -r books $releasedir/
cp -r icons $releasedir/
cp LICENSE $releasedir/LICENSE.txt
cp *.qm $releasedir/
cp cutecw.cfg.sample $releasedir/cutecw.cfg
# Build info
echo "Build infos" > $releasedir/BUILD.txt
echo "Built with MXE [git:master] and qt5 with target $target" >> $releasedir/BUILD.txt
echo "Jenkins build: $BUILD_TAG" >> $releasedir/BUILD.txt
echo "Build ID $BUILD_ID: " >> $releasedir/BUILD.txt
# Create a zip archive 
zip -r $JOB_NAME.$BUILD_ID.zip $JOB_NAME.$BUILD_ID/
# Creating checksums
md5sum $JOB_NAME.$BUILD_ID.zip > $JOB_NAME.$BUILD_ID.sums
sha256sum $JOB_NAME.$BUILD_ID.zip >> $JOB_NAME.$BUILD_ID.sums
Archive the artifacts
cutecw.zip, cutecw.sums

I have also used the Copy Artifacts over SSH plugin to copy the archive and checksums to the public repository.

2016/09/25 00:00 · dashie

jenkins-debian-glue and LXC

First you can install Jenkins and then follow the guide here to manually install jenkins-debian-glue.

Then you will need to:

  • Install pbuilder from jessie backports
  • Use the following in the build section for binaries. Instead of:
/usr/bin/build-and-provide-package
I used this (enabling Freight too):
export PBUILDER_CONFIG=/etc/pbuilder_lxc
export USE_FREIGHT=true
export SUDO_CMD=sudo
export KEYID="pkg@sigpipe.me"
/usr/bin/build-and-provide-package
rsync -lrt --stats --delete --force --ignore-errors /var/cache/freight/ 10.0.0.101::jenkins-deb-repo-cutecw >/dev/null
/etc/pbuilder_lxc
USEDEVFS=no
USEDEVPTS=no
USESYSFS=no
USEPROC=no
In the sudoers file (visudo):
# If using Reprepro instead of Freight, stick with the sudoers entry from the j-d-g manual
# You will temporarily need this for the first successful build:
jenkins ALL=NOPASSWD: ALL
# since there is some /bin/sh, cat, etc. used to build the config
# Then switch to this afterwards:
# jenkins ALL=NOPASSWD: /usr/sbin/cowbuilder, /usr/sbin/chroot, /bin/mkdir, /bin/rm -rf, /usr/local/bin/freight
Defaults env_keep+="DEB_* DIST ARCH"
  • Edit /usr/share/debootstrap/functions
  • At line 1027, comment out the line in_target mount -t sysfs sysfs /sys (see the one-liner below)
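A possible sed one-liner to comment it out, assuming the line still matches exactly:

sed -i '/in_target mount -t sysfs sysfs \/sys/ s/^/#/' /usr/share/debootstrap/functions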

In your LXC host (like proxmox):

  • Edit /etc/apparmor.d/lxc/lxc-default
  • Add the line mount options=(rw, bind, ro), right after the deny mount… line
  • Reload AppArmor: /etc/init.d/apparmor reload

Remember that:

  • Most of the time dev, proc and sys are useless
  • You can't mount sysfs in LXC
  • We told pbuilder not to use any of them; we don't care about what debootstrap does, except for sysfs

If I haven't forgotten anything, you should be good to go…

2016/09/24 00:00 · dashie

Older entries >>