Reverse ssh tunnel Part 2: Going through Windows

Preface

We wrote a previous post about reverse ssh tunneling, which is in Spanish (we might redo it in English… or not), so this is part 2.

The main idea of Part 1 was to create a reverse ssh tunnel to remotely access a server sitting in a private network that the network staff won’t allow us to reach directly.

The issue

Let’s say we have a remote server we want to access (http or ssh), but this server is in a local network and doesn’t have internet access; we will call it Server_1 and its local IP is 172.20.1.1. Then we have our local PC, called Local_PC, which has an internet connection; our public IP is 200.100.1.1. If we try to access Server_1 directly we won’t be able to, since it doesn’t have an internet connection.

Tunneling through anything

Then, when we need to access the server, the client in the remote network provides us with a TeamViewer connection to a laptop running Windows. What can we do to access Server_1 directly from our Local_PC running our beloved Linux?

Principle of least effort
The solution

PuTTY

Yes, we can use PuTTY to create a reverse ssh tunnel from Server_1 to Local_PC, but how do we do so?

Open PuTTY on the laptop and go to Connection -> SSH -> Tunnels. In Source port, enter the <remote_port> that will be opened on Local_PC (200.100.1.1); be sure not to use an already occupied port. In Destination, enter 172.20.1.1:22 for ssh, 172.20.1.1:80 for http, or whatever port you need. Switch the option from Local to Remote, since we are doing a reverse tunnel, and simply click Add. Finally, go back to the Session section, connect to Local_PC via ssh, and voilà!

PuTTY <3 (Note the “Remote” option)

You will be able to access Server_1 from Local_PC simply by running:

ssh user@localhost -p <remote_port>

It is possible to Add more tunnels within a single connection, so you can tunnel an ssh connection and an http one at the same time.
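For reference, if the laptop had an OpenSSH client available, the same reverse tunnel could be sketched on the command line. The port numbers below (2222, 8080) are placeholders; pick any free ports:

```shell
# Run on the laptop in the remote network.
# Each -R exposes a port on Local_PC that forwards back to a
# host:port inside the private network.
ssh -R 2222:172.20.1.1:22 -R 8080:172.20.1.1:80 user@200.100.1.1

# Then, on Local_PC:
ssh user@localhost -p 2222      # reaches Server_1's sshd
curl http://localhost:8080/     # reaches Server_1's http service
```

This is exactly what the PuTTY Tunnels dialog builds for you: Source port maps to the number after -R, and Destination maps to the host:port pair.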

htop: uptime (!)

While performing a routine check on a client’s machines, I noticed something that had never caught my attention until now…

Opening our beloved and user-friendly htop

A typical htop with more than 40 running threads; the image had to be cropped a bit

In the image we can see that, next to the not-so-small figure of 322 days of uptime, there is a (!). It seems this was always there, but I had never paid attention to it. Digging a little into what it means, I ended up in the htop source code, where we can see:

static void UptimeMeter_updateValues(Meter* this, char* buffer, int len) {
   int totalseconds = Platform_getUptime();
   if (totalseconds == -1) {
      snprintf(buffer, len, "(unknown)");
      return;
   }
   int seconds = totalseconds % 60;
   int minutes = (totalseconds/60) % 60;
   int hours = (totalseconds/3600) % 24;
   int days = (totalseconds/86400);
   this->values[0] = days;
   if (days > this->total) {
      this->total = days;
   }
   char daysbuf[15];
   if (days > 100) {
      snprintf(daysbuf, sizeof(daysbuf), "%d days(!), ", days);
   } else if (days > 1) {
      snprintf(daysbuf, sizeof(daysbuf), "%d days, ", days);
   } else if (days == 1) {
      snprintf(daysbuf, sizeof(daysbuf), "1 day, ");
   } else {
      daysbuf[0] = '\0';
   }
   snprintf(buffer, len, "%s%02d:%02d:%02d", daysbuf, hours, minutes, seconds);
}

And there we realize it was just htop congratulating us(?) for passing the 100-day uptime milestone 😛

Using GCC Intrinsics (MMX, SSEx, AVX) to look for max value in array

To begin with, you shouldn’t start new code focusing on performance; functionality should be the key factor, while leaving room for future improvements. But, after you have done your job and everything works as it should, you might need to tweak your code a little to increase its performance.

First functionality then efficiency
The problem

We want to find the maximum value in an array of int16_t mono audio samples. The maximum value of the array will be the peak value during the audio interval being analysed. This peak is known as the sample peak and should not be confused with the real peak of the audio, the true peak (there is a very good explanation of the differences here).

A basic(?) solution

Ok, we have to find the maximum value in an array while focusing on functionality; this is actually quite simple…

int16_t max = buff[0];
int i;
for (i = 1; i < size; i++) {
    if (max < buff[i]) {
        max = buff[i];
    }
}

The intrinsics

These are a series of functions that implement many MMX, SSE and AVX instructions. They map directly to C functions and are further optimized by gcc. Most of the instructions use vector operations, and you can work with 128, 256 or 512 bit vectors depending on the architecture and the compiler. There is a very detailed guide here and you can see a full list of the functions here.

You will need to include the headers corresponding to the functions you want to call, or just include x86intrin.h, which will include all the available ones. Then you will need to add the appropriate flag to the gcc compile line; in this specific case I’m using -mavx2. If you want to check the supported instruction sets, you may use the following command to list the corresponding macros that gcc will define.

$ gcc -mavx2 -dM -E - < /dev/null | egrep "SSE|AVX"
#define __AVX__ 1
#define __AVX2__ 1
#define __SSE__ 1
#define __SSE2__ 1
#define __SSE2_MATH__ 1
#define __SSE3__ 1
#define __SSE4_1__ 1
#define __SSE4_2__ 1
#define __SSE_MATH__ 1
#define __SSSE3__ 1

The code

What I’m going to do is compare two 128-bit vectors: one holds the running maximum (initialized to 0, so if all the values are negative the result will be wrong) and the other is loaded from the input buffer. This is done using _mm_max_epi16(), which compares 8 values (int16_t) at a time. This is one of the reasons why the intrinsics increase performance.

After going through the whole input buffer, the maxval vector holds the maximum value, but I don’t know its position. For the sake of the example I am doing two redundant things here. First, using _mm_shufflelo_epi16()/_mm_shufflehi_epi16() and _mm_max_epi16(), I compare values inside the vector and rotate them so the whole vector ends up holding the maximum value. Since these are 16-bit values I can’t shuffle the whole vector at once, so the shuffle is done on the high and low halves separately.

Finally, I store the resulting vector into an int16_t array with _mm_store_si128() and look for the maximum inside it (I could have done this right away, but I wanted to show the shuffle, which might be useful if the samples were not 16 bit and the shuffles were not partial).

#include <x86intrin.h>

int16_t find_max(int16_t* buff, int size)
{
    int16_t maxmax[8];
    int i;
    int16_t max = buff[0];

    __m128i *f8 = (__m128i*)buff;
    __m128i maxval = _mm_setzero_si128();
    __m128i maxval2 = _mm_setzero_si128();
    /* each 128-bit vector holds 8 int16_t samples */
    for (i = 0; i < size / 8; i++) {
        maxval = _mm_max_epi16(maxval, f8[i]);
    }
    maxval2 = maxval;
    for (i = 0; i < 3; i++) {
        maxval = _mm_max_epi16(maxval, _mm_shufflehi_epi16(maxval, 0x3));
        maxval2 = _mm_max_epi16(maxval2, _mm_shufflelo_epi16(maxval2, 0x3));
    }
    /* unaligned store, since maxmax is not guaranteed 16-byte aligned */
    _mm_storeu_si128((__m128i*)maxmax, maxval);
    for (i = 0; i < 8; i++)
        if (max < maxmax[i])
            max = maxmax[i];
    return max;
}
Some numbers

I’m going to compare three different approaches, finding the maximum value in (pseudo) random arrays of 10000 and 1000000 samples.

  • Using an intuitive for loop like the one shown before
  • The same loop, but with compiler optimizations
  • Using SSE instructions via intrinsics (sample code).

This table shows the average delay, in microseconds (µs), that the different functions take.

Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz

Elements | Mode             | Avg. Delay [µs]
10000    | Default          | 650
10000    | -O2              | 250
10000    | Intrinsics       | 200
10000    | Intrinsics w/-O2 | 30
1000000  | Default          | 31000
1000000  | -O2              | 13000
1000000  | Intrinsics       | 8500
1000000  | Intrinsics w/-O2 | 3500

As you can see, the code gets much faster with the Intrinsics, and even faster with the optimizations.

Afterwords
  • You will see an improvement in most cases, and it gets better as the arrays get larger
  • It gets even better with compiler optimizations (-O2)
  • There is still a difference with smaller arrays (the 10000-element ones)
  • The key is to look for repetitive operations over large arrays
  • You may lose some portability, as some instructions may not be available on every processor
  • There are transition penalties when switching between AVX and SSE, so this should be considered when mixing both

Hope you guys liked the post; please feel free to ask any questions, and if I can, I will answer them.


Build shared libraries with invalid instructions

Some background

We have to decode many videos in multiple encodings (mpeg2video, h264, etc.). The decoding process consumes a lot of CPU, and sometimes a single server decodes up to 20 Full HD videos; to make this possible we aid the CPU with either Nvidia CUDA or Intel QSV.

We distribute our software in rpm packages, which are built in our R&D department and deployed to our clients; this usually means the hardware specifications are not the same.

These days we dealt with an interesting problem: we wanted to unify our decoding library and make its compilation hardware independent. CUDA didn’t pose a problem, because we install the libraries on all of our systems whether they have a video card or not. However, Intel QSV support proved to be a little more difficult… If Intel QSV is not supported by the processor, the application raises a SIGILL (Illegal Instruction), which means the processor tried to execute an invalid instruction, and execution stops.

We wanted to keep one package for the library that could be built on any of our processors (whether they support Intel QSV or not). Up to now, if we wanted Intel QSV we had to compile the library on a specific processor that supported it, and we had a bunch of conditions in our Makefiles to detect the availability of QSV and build the library with it. Since the decoding library changes very often, new rpm packages are uploaded almost daily; so if the build CPU didn’t support QSV, the package was built without it, and when it was installed on a CPU that did support it, it didn’t use it D:

Challenge accepted
Wait.. what is the challenge?

We want to run an application that may contain invalid instructions in some of the libraries it includes, and decide at execution time whether to call them or not…

The solution

So what did we do? First of all, we took out the Intel QSV decoding part and made a new library with it, like a wrapper. You might be wondering what this changes, because our main decoding library would link the QSV library and should throw a SIGILL just as before.

What I didn’t mention is how we link this new library: we are not linking it as a shared object at compile time, we load it dynamically at execution time. How? Using dlopen(), which allows us to open a shared object and link it during execution. By linking this way, we can check the processor at runtime and load the QSV library only if it is supported; however, we have to manually bind every function in the library to function pointers in the main decoding library’s context using dlsym(). To sum up:

  1. Divide the main decoding library and the QSV part
  2. Build the latter as a shared object
  3. Check in the main library whether QSV is supported
  4. Dynamically load the .so
The solution
A few considerations

We still need the headers of the QSV library because they provide the definitions of the structures, so that rpm package is always installed, whether we have QSV support or not; this way we can include the header and compile the main decoding library without errors.

It is important not to link QSV at compile time; if we do, the application will compile, but when we try to run it a SIGILL will show up (when QSV is unsupported) before main() is even executed. This happens because the library has a constructor that is called when the library is loaded.

If the unsupported library has too many API functions, we have to bind each and every one of them manually, which can be tedious .__. And take care not to load the library when it is not supported, because you will get a SIGILL immediately.

See which rpm package a file belongs to

We make rpm packages for most of our libraries and applications, which brings a lot of benefits. However, when you have too many packages you might (and probably will) forget which package installs a specific file. To find out, run:

$ rpm -qf filename

For example:

[root@videus ~]# rpm -qf /bin/bash
bash-3.0-31

So the file /bin/bash is installed by the package bash-3.0-31.

1989 Zero Wing’s broken English

This is pretty straightforward and easy to remember

Hello world!

This is 3 Way Solutions’ official blog, focused on technology issues and geeky stuff. Here you will get to know us in the Research and Development department: our perspectives, challenges, issues, and how we tackle them (or not). Most of our posts will be about TV (both analog and digital), Linux, programming, HPC, etc., and how we merge all of this to create our products.

3 Way Solutions is a company that sells products and solutions (mostly hardware), but the essence of these products is the software, so the R&D department is composed mainly of programmers. All of our systems are based on GNU/Linux, and some of our main programming languages are C, C++, Perl, and Bash. Our products are intended mainly for broadcast, cable, professional video, and government, focused on TV recording, content detection, media monitoring, content repurposing, compliance, and QoS and QoE monitoring.

All of our blog entries will be authored by our developers, and we will try to keep them in English (we are based in Argentina, so our first language is Spanish; sorry in advance for our English n.nb). We will share (without giving away our top secrets, of course) tips, tricks, sample scripts and apps, different approaches, and issues we face, hoping to enrich the community and looking for fellow developers’ perspectives and opinions. Thank you for reading, and we hope you like our posts and participate in them.

To the infinity and beyond