Automating SSH tunnels

Or, how I managed to update my parents’ home router using a mess of SSH tunnels.

I originally had this idea some time ago when I moved flat and the new router would not let me set up DHCP reservations or port forwarding. I figured that instead of having the router manage these, I could punch a hole through the firewall from the inside and use an internet host plus SSH tunnels to gain access into my home network.

More recently, my parents’ ISP updated their router and I wanted to make some changes to its configuration, which I can usually only do when I visit them once or twice a year. These routers also seem to lose their configuration every now and then after a thunderstorm, so I can’t really rely on them to hold on to it either. And of course, getting family members to make changes to the router themselves is a no-go.

Port forwards

Normally, if I were to set up port forwarding on the router, with external port 8022 redirected to remotehost:22, I could simply ssh to the router’s external-ip:8022 to reach my machine.

Given we don’t have that mapping in place, we start off kind of bad.

What do

Two thoughts come to mind:

  1. SSH tunnels
  2. Puppet

Ever since learning about Puppet I have grown quite fond of its capabilities, and I quickly set up a small infrastructure to manage our home server and a handful of Raspberry Pis. I’m going to take advantage of the fact that I already have it set up to run some commands “remotely”.

If you don’t have any remote management capabilities, I’m afraid it might be a bit more complicated to get started.

SSH tunneling

So the first step is to try, locally, to create a tunnel to the internet host. We want to open a port on the internet host (I chose 8022) that, when connected to, will in fact connect to my local server’s SSH port (22).

Proof of concept (run this on the home machine):

/usr/bin/ssh -fnNT -R 8022:localhost:22 user@internethost

Quickly, the arguments: -f backgrounds ssh after authentication, -n redirects stdin from /dev/null, -N skips running a remote command, -T disables pty allocation, and -R sets up the remote (reverse) port forward. See ssh(1) for the details.

With that, on the internet host I was able to run “ssh localhost -oPort=8022” and get the login prompt for my home machine!

However, this did not work remotely using “ssh internethost -oPort=8022”.

I had to update the “/etc/ssh/sshd_config” on the internet host with the following line:

GatewayPorts yes

See the sshd_config(5) man page for more on GatewayPorts.

Additionally, you might encounter connection timeouts that drop the tunnel. In that case you’ll probably want to add one of these two configurations:

# Server side; sshd_config
ClientAliveInterval 60
ClientAliveCountMax 2

# Client side, in ~/.ssh/config
Host internethost
 ServerAliveInterval 60

Restart sshd on the internet host and tada! I’m now able to ssh into my home machine from anywhere, without having to think about static IPs, port forwarding, dynamic host names, etc.

ssh user@internethost -oPort=8022
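Rather than remembering the port, a client-side alias can go in ~/.ssh/config on the roaming machine (the alias name “home” is my own choice; the host and user names match the examples above):

```
# ~/.ssh/config
Host home
    HostName internethost
    Port 8022
    User user
```

After which a plain “ssh home” lands on the home machine.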

At this stage, the access path is: laptop → internethost:8022 → (reverse tunnel) → home machine:22.

I then added that to the Puppet manifest. Although this won’t restore the connection instantly after a dropout, it should at least re-establish it every time the agent runs (every 30 minutes):

exec { 'ssh-tunnel':
    command => '/usr/bin/ssh -fnNT -R 50014:localhost:22 user@internethost',
    unless  => '/bin/ps -ef | /bin/grep -v grep | /bin/grep "ssh -fnNT"',
    user    => 'johann',
}

Redirecting to a different host

Now that the remote home machine has picked up the configuration change, we have access to it through the tunnel and testing should be a bit quicker. The aim now is to open another port on internethost that redirects to the remote router’s web port.

You might have noticed that in the commands above we were using “localhost” as the destination of the forwarded port. By changing this to the internal IP of the remote router, we can redirect traffic to its web management interface.

ssh -fnNT -R 8080:{router-ip}:80 user@internethost

And thus the final setup: browser → internethost:8080 → (reverse tunnel) → remote router’s web interface.

Now, I can remotely manage the remote router by visiting http://internethost:8080 in my web browser.

Keeping the tunnels running

I made a short script that runs from a crontab every few minutes:

$ crontab -l | grep Tunnel
*/4 * * * * /home/johann/tools/ 8022

$ cat tools/
#!/bin/bash
port=$1
pgrep -f "ssh -fnNT" >/dev/null || /usr/bin/ssh -fnNT -R ${port}:localhost:22 johann@internethost
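If autossh happens to be installed, a systemd service is a more reactive alternative to cron, restarting the tunnel as soon as it exits (unit name and paths here are illustrative):

```ini
# /etc/systemd/system/reverse-tunnel.service
[Unit]
Description=Reverse SSH tunnel to internethost
Wants=network-online.target
After=network-online.target

[Service]
User=johann
# -M 0 disables autossh's monitor port; ssh's own keepalives detect drops
ExecStart=/usr/bin/autossh -M 0 -N -T \
    -o ServerAliveInterval=30 \
    -o ExitOnForwardFailure=yes \
    -R 8022:localhost:22 johann@internethost
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
```

Enable it with “systemctl enable --now reverse-tunnel”.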

Posted in Linux

Puppet trouble with vcsrepo module

I tried to use the vcsrepo module for Puppet a while back and it didn’t work. Trying again tonight, it still didn’t work, with no message even in debug mode to help understand what was going on. Then I finally remembered something from having used another module previously:

# /etc/puppet/puppet.conf
pluginsync = true

And that’s it!

The module simply includes a plugin, and if you don’t sync it, the module won’t run.
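For completeness, pluginsync goes in the [main] (or [agent]) section of the agent’s puppet.conf; a minimal sketch:

```ini
# /etc/puppet/puppet.conf
[main]
pluginsync = true
```

Newer Puppet versions enable it by default.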




Posted in Linux

Javascript unexpected errors

Today while working on a mini-project, I encountered some problems with JavaScript, notably:

Unexpected end of input
Unexpected token illegal

Oddly, the source file had barely changed (a new line, for example) and there was no apparent error. Also, the file as seen through Chrome Developer Tools sometimes didn’t reflect the changes made to the source.

After searching around for a bit, I found a StackOverflow post which led to another post, where it appears to be a problem with VirtualBox and its use of the sendfile() function.

The problem appears when using shared folders through VirtualBox, and the fix is to add the following directive to your site configuration:

For nginx:

sendfile off;
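For context, sendfile is valid at the http, server or location level in nginx; for a dev site served out of the shared folder it might look like this (paths are illustrative):

```nginx
server {
    listen 80;
    root /var/www/project;  # the VirtualBox shared folder
    sendfile off;           # avoid stale/truncated files over vboxsf
}
```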

For Apache:

EnableSendfile Off

Extract from the Apache docs:

This directive controls whether httpd may use the sendfile support from the kernel to transmit file contents to the client. By default, when the handling of a request requires no access to the data within a file — for example, when delivering a static file — Apache uses sendfile to deliver the file contents without ever reading the file if the OS supports it.

This sendfile mechanism avoids separate read and send operations, and buffer allocations. But on some platforms or within some filesystems, it is better to disable this feature to avoid operational problems:

  • Some platforms may have broken sendfile support that the build system did not detect, especially if the binaries were built on another box and moved to such a machine with broken sendfile support.
  • On Linux the use of sendfile triggers TCP-checksum offloading bugs on certain networking cards when using IPv6.
  • On Linux on Itanium, sendfile may be unable to handle files over 2GB in size.
  • With a network-mounted DocumentRoot (e.g., NFS or SMB), the kernel may be unable to serve the network file through its own cache.

For server configurations that are vulnerable to these problems, you should disable this feature by specifying:

EnableSendfile Off

For NFS or SMB mounted files, this feature may be disabled explicitly for the offending files by specifying:

<Directory "/path-to-nfs-files">
    EnableSendfile Off
</Directory>

Please note that the per-directory and .htaccess configuration of EnableSendfile is not supported by mod_disk_cache. Only global definition of EnableSendfile is taken into account by the module.

The nginx documentation for the sendfile directive also references this issue.

Posted in WebDev

Disk passthrough in Proxmox

The first step is to identify the disk you want to pass through. For this, there are multiple methods:

fdisk -l
ls -l /dev/disk/by-label
ls -l /dev/disk/by-uuid
ls -l /dev/disk/by-id
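If available, lsblk (from util-linux) shows the volatile device names side by side with the stable identifiers in one view:

```shell
# One combined view: device name next to its label and UUID
lsblk -o NAME,SIZE,LABEL,UUID
```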

Note that using /dev/sdX is not a great option, since the letter assignment can change across reboots, whereas a disk’s UUID will not.

Then, you’ll want to copy whichever label/uuid/id is relevant. To give the VM direct access to the disk (passthrough), there are now two options:

1. Through the Proxmox console

qm set {vmid} -{ide|sata|scsi}# /dev/disk/by-{label|uuid|id}/{reference}

where {vmid} is the ID of your VM and ide, sata or scsi the type of controller you want to attach the disk to.

Note that the ide index can be 0-3, sata 0-5 and scsi 0-13. Also, ide0 is generally the boot disk and ide2 the CD drive; adjust depending on your configuration.


qm set 101 -sata0 /dev/disk/by-label/data01

This will modify the {vmid}.conf file, which brings us to option 2.

2. By directly modifying the {vmid}.conf file in /etc/pve/qemu-server/

Add a line as follows:

{ide|sata|scsi}#: /dev/disk/by-{label|uuid|id}/{reference}


sata0: /dev/disk/by-label/data01


You’ll want to shut down and then boot the VM (not just a regular restart) for the changes to take effect, after which the disk should be accessible in your VM.

Posted in Linux

Saving RaspberryPi sound configuration

One of the things that’s been bothering me with the Raspberry Pi, even though I’m managing it through Puppet, is the sound configuration.

Sometimes the sound is x% lower for no obvious reason; other times it goes through HDMI when I wanted it through the 3.5mm jack. Then I have to go figure out how to set it all back again.

Turns out, the settings are loaded from a state file, located by default at /var/lib/alsa/asound.state (assuming a standard ALSA setup).

This means that by changing some settings and saving again, it’s possible to see which parameters were modified, and then use this to create a template for use in Puppet.

To save the state:

alsactl store

For example, to change the output between auto/analog/hdmi, one would use the following command:

sudo amixer cset numid=3 1

which isn’t very explicit.
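To make it a bit more explicit, a small wrapper can translate a name into the value amixer expects (the helper name is made up):

```shell
# Map a friendly output name to the value amixer expects for numid=3
# on the Pi (0 = auto, 1 = analog/3.5mm jack, 2 = HDMI)
audio_route() {
    case "$1" in
        auto)   echo 0 ;;
        analog) echo 1 ;;
        hdmi)   echo 2 ;;
        *)      echo "unknown route: $1" >&2; return 1 ;;
    esac
}

# Usage: sudo amixer cset numid=3 "$(audio_route hdmi)"
```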

In the file, this will change the number corresponding to “value”:

control.3 {
	iface MIXER
	name 'PCM Playback Route'
	value 1
	comment {
		access 'read write'
		type INTEGER
		count 1
		range '0 - 2'
	}
}
On a Raspberry Pi, 0=auto; 1=analog; 2=HDMI.

To modify the output volume, you can set the “value” in the first control to its highest value in the range:

control.1 {
	iface MIXER
	name 'PCM Playback Volume'
	value 400
	comment {
		access 'read write'
		type INTEGER
		count 1
		range '-10239 - 400'
		dbmin -9999999
		dbmax 400
		dbvalue.0 400
	}
}

Posted in Linux, Raspberry Pi, Software