Humans are curious (perhaps that is what makes us human), and you might be curious about what your program is doing.
Sometimes the program has several threads running, and sometimes you can't stop it or kill it just to see what it is doing.
A typical situation: a service running at a client site has a problem, and you suspect one of its threads is locked or waiting for something that will never happen, while all the other threads look fine; so you don't want to interrupt or kill the process for now.
What I do is use gdb with a file containing the commands I want to run, ending the file with the 'q' (quit) command so gdb quits and the process can continue its execution. I usually write a file called 'commands' with this:
thread apply all bt
q
That executes 'bt' (backtrace) on every thread and then 'q' (quit) exits gdb after the backtraces. Printing the backtrace of all threads shows me (more or less) what each thread is executing.
Using the 'commands' file, I run gdb like this: gdb -p <pid> -x commands > /tmp/threads
Here <pid> is the process ID.
Notice that I redirect the output to a file: otherwise gdb stops printing when the screen is full and waits for me to press Return, which pauses the process for a while, and that is something I don't want to happen.
After looking at the threads, I can write another 'commands' file with other instructions for gdb, such as printing some variable.
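For example, a follow-up 'commands' file that prints a variable after the backtraces could look like this (the variable name some_counter is made up for illustration; use one from your own program):

```
thread apply all bt
# 'some_counter' is a hypothetical variable name
print some_counter
q
```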
We have to decode videos in multiple encodings (mpeg2video, h264, etc.). The decoding process consumes a lot of CPU, and sometimes a single server decodes up to 20 Full HD videos; to make this possible we assist the CPU with either Nvidia CUDA or Intel QSV.
We distribute our software in rpm packages, which are built in our R&D department and deployed to our clients, so the hardware specifications usually differ from machine to machine.
Recently we dealt with an interesting problem: we wanted to unify our decoding library and make its compilation hardware-independent. CUDA didn't pose a problem because we install its libraries on all of our systems, whether they have a video card or not. However, Intel QSV support proved to be a little more difficult… If Intel QSV is not supported by the processor, the application raises a SIGILL (Illegal Instruction), which means the processor tried to execute an invalid instruction, and execution stops.
We wanted to keep a single package for the library that could be built on all of our processors (whether they support Intel QSV or not). Up to now, if we wanted Intel QSV we had to compile the library on a specific processor that supported it, and we had a bunch of conditions in our Makefiles to detect the availability of QSV and build the library with it. Since the decoding library changes very often, we upload new rpm packages almost daily, so if the build CPU doesn't support QSV the package is built without it, and when it is installed on a CPU that does support it, it doesn't use it D:
We wanted to run an application that may contain invalid instructions in some of the libraries it includes, and decide at execution time whether to call them or not…
So what did we do? First of all, we took the Intel QSV decoding part out and made a new library with it, like a wrapper. You might be wondering what this changes: if our main decoding library simply linked against the QSV library, it would throw a SIGILL same as before.
What I didn't mention is how we link this new library: we do not link it as a shared object when compiling, we load it dynamically during execution. How do we do it? Using dlopen(), which lets us open an object file, such as a shared object, and link it at run time. By linking this way, we can check the processor during execution and load the QSV library only if it is supported; however, we have to manually bind every function the library exports to function pointers in the main decoding library's context using dlsym(). To sum up:
Split the QSV part out of the main decoding library
Build the QSV part as a shared object
Check in the main library whether QSV is supported
Dynamically load the .so only if it is
A few considerations
We need the headers of the QSV library because they provide the definitions of its structures, so that rpm package is always installed, whether we have QSV support or not; this way we can include the header and compile the main decoding library without errors.
It is important not to link against QSV at compile time: if we do, the application compiles, but when we try to run it (on an unsupported CPU) a SIGILL shows up before main() is even executed. This happens because the library has a constructor that is called as soon as the library is loaded.
If the wrapped library has a large API, we have to bind each and every function manually, which can be tedious .__. And take care not to load the library when it is not supported, because you will get a SIGILL immediately.
Lots of branches mean lots of disorder. To solve this we can tag branches and archive them.
To keep the git repo tidy, we usually recommend not leaving open branches that have already been merged into the main branch.
Here is how to tag a branch and then close it correctly on the local and remote repos:
1 - Create the tag (locally)
git tag archive/<branchname> <branchname>
2 - push the new tag to remote
git push --tags
3 - delete branch locally
git branch -d <branchname>
4 - delete branch on remote repo
git push origin --delete <branchname>
5 - go to master branch (or any other branch)
git checkout master
How to recover an archived branch later:
1 - go to tag
git checkout archive/<branchname>
2 - recreate branch
git checkout -b new_branch_name
We make rpm packages for most of our libraries and applications, and that brings a lot of benefits. However, when you have too many packages you might (and probably will) forget which package installs a specific file.
$ rpm -qf <filename>
For example:
[root@videus ~]# rpm -qf /bin/bash
bash-3.0-31
So the file /bin/bash is installed by the package bash-3.0-31.
This is pretty straightforward and easy to remember.
One common issue while debugging, refactoring, or just programming is searching for a word or sentence across a huge number of files and folders.
Several algorithms could be implemented, but they all come down to reading the files, slower or faster, one by one, until a match is found.
Fortunately, GNU provides a powerful tool called 'grep'. Basically, it filters a file's lines, searching for a specific pattern. It uses an algorithm optimized for scanning files; some say its real secret is not reading every byte at all.
This example will show you the matches in the file <filename>:
$ grep "foo" <filename>
Now we go a step further by adding some options to the 'grep' command in order to search every file and folder under our current location: -r recurses into directories, -H prints the file name of each match, and -n prints the line number.
The following example lists each match as the file name, followed by the line number and the matching line:
$ grep -nHr "frequency"
test/mpeg-freq-test.c:49: struct v4l2_frequency vf;
test/mpeg-freq-test.c:55: vf.frequency = f[cnt % 2] * 16;
test/mpeg-freq-test.c:59: perror("could not set frequency");
doc/README.radio:26: -f Tune to a specific frequency
Logs are useful for debugging and tracing what your code is doing, and what it has done in the past.
I started using 'tac' when reading logs, especially when I need to see what happened most recently: $ tac that_log.log | less
From man tac: tac - concatenate and print files in reverse
So with 'tac' you get the last line first, which is nice if you don't need to see lines from long ago.
After using tac for a while, I feel it is better to read from the end to the beginning: you read the error first and then what happened before it. I think reading the error first makes you more alert, so you read the following lines paying more attention.
If you just use less, you have to press Shift-G and wait a moment to get to the last line, and with large files less sometimes hangs for a while.
Another option is tail -n <lines> to get the last lines right away, but usually <lines> are not enough.
This is 3 Way Solutions's official blog, focused on technology issues and geeky stuff. Here you will get to know us, the Research and Development department: our perspectives, challenges, issues, and how we tackle them (or not). Most of our posts will be about TV (both analog and digital), Linux, programming, HPC, etc., and how we merge all of this to create our products.
3 Way Solutions is a company that sells products and solutions (mostly hardware), but the essence of these products is the software, so the R&D dept. is composed mainly of programmers. All of our systems are based on GNU/Linux, and some of our main programming languages are C, C++, Perl, and Bash scripting, among others. Our products are intended mainly for broadcast, cable, professional video, and government, focused on TV recording, content detection, media monitoring, content repurposing, compliance, and QoS/QoE monitoring.
All of our blog entries will be authored by our developers, and we will try to keep them in English (we are based in Argentina, so our first language is Spanish; sorry in advance for our English n.nb). We will share, of course without giving away our top secrets, tips, tricks, sample scripts and apps, different approaches, and issues we face, hoping to enrich the community and looking for fellow developers' perspectives and opinions. Thank you for reading, and we hope you like and participate in our posts.