Embedded Linux update


I don’t think I need to explain why the ability to update a system is important. Nearly every day I read about another device riddled with security holes.

Bugs are inevitable – every system has some. That is why it is extremely important to be able to fix them.

While some (maybe most) devices contain a full-blown Linux distribution with pacman/apt support, there are also devices running a minimal distribution built using Yocto or Buildroot. These distributions work much closer to the metal, without the resources required to run a decent package manager.

The update process itself is a little different from that of a regular Linux machine. The user usually cannot supervise it or make decisions.

The contents of a firmware update are under the full control of the device maker. It is the manufacturer who decides what will be updated.

It is the device maker’s responsibility to ensure that an update won’t „brick” the device.

Who this is for

I expect intermediate knowledge of Linux and the Yocto build system.

I am using Yocto, but the solution can be applied to any NAND-based embedded Linux project.


I am using a custom board based on Atmel’s SAMA5. You can use this article to build an update system for any U-Boot/NAND based board.

I will port this solution to NXP SoM devices in the future.



Yocto is a widely used system for building complete embedded Linux images.

Unfortunately, the learning curve is extremely steep. While the documentation is rich and detailed, there are a number of „gotchas” which make the development process painful.

Device tree

ARM-based devices do not have a BIOS. There is no firmware service that could tell the Linux kernel about the hardware.

That is why Device Tree was introduced.

A device tree is a binary-encoded hardware description. The Linux kernel parses and analyzes the device tree.

The device tree is usually loaded into RAM by the bootloader; the bootloader then loads and starts the Linux kernel.
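As an illustration (the file names and the board compatible string below are made up, not the real SAMA5 tree), a device tree starts as plain text source and is compiled into the binary blob the bootloader loads:

```shell
# Hypothetical, minimal device tree source -- not a real board description.
cat > /tmp/minimal.dts <<'EOF'
/dts-v1/;
/ {
    compatible = "acme,example-board";
    memory@20000000 {
        device_type = "memory";
        reg = <0x20000000 0x10000000>; /* 256 MiB of RAM at 0x20000000 */
    };
};
EOF

# dtc (the device tree compiler, shipped with the kernel sources) turns the
# source into the .dtb blob that the bootloader loads into RAM:
if command -v dtc >/dev/null 2>&1; then
    dtc -I dts -O dtb -o /tmp/minimal.dtb /tmp/minimal.dts
fi
```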

NAND flash

While most of today’s embedded systems use a miniSD card or some kind of eMMC chip, some still use raw NAND flash.

MiniSD and eMMC can be described as NAND with an additional controller – a specialized chip that balances wear across NAND flash sectors. The controller is also able to detect bad sectors.

Because NAND flash has some special properties, ordinary file systems cannot be used on it. That is why specialized file systems were invented (UBI/UBIFS, YAFFS and more).

NAND partitions

NAND flash can be divided into partitions, similar to a hard disk. But unlike a hard disk, NAND does not contain a partition table.

NAND partitions are defined using the Device Tree or kernel command line parameters.
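For example, a whole NAND layout can be handed to the kernel in a single mtdparts= command line parameter (this layout matches the one used later in this article). Each comma-separated entry is size(label) plus optional flags:

```shell
# mtdparts= parameter: <flash name>:<size>(<label>)[ro],...  ("-" = rest of flash)
MTDPARTS="mtdparts=atmel_nand:256k(bootstrap)ro,512k(uboot)ro,256k(env1),256k(env2),-(sys)"

# Strip the prefix and print one partition definition per line:
echo "${MTDPARTS#mtdparts=atmel_nand:}" | tr ',' '\n'
# prints five lines, from 256k(bootstrap)ro down to -(sys)
```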


UBIFS is a relatively new file system, designed to run on top of a UBI image.
UBI (Unsorted Block Images) is a layer placed on top of a NAND partition.

UBI can be described as an image that can contain many volumes.

UBI is much safer than raw NAND flash. It balances writes, and it is able to detect bad sectors and recover their data. That is why it is safe to store the volume table inside UBI.


A bootloader is a program whose job is to start the operating system.

Bootloaders can be divided into:

  • First stage – initializes the hardware and loads the second stage bootloader.
  • Second stage – its job is to start the operating system.


Bootstrap (AT91Bootstrap) is a first stage bootloader made by Atmel.

Its job is to initialize the hardware and load the second stage bootloader.


U-Boot is a popular second stage bootloader.

Many hardware vendors create patches so that U-Boot can support their chips.

U-Boot has a command-line interface. The user can modify the default behavior by changing environment variables.

The environment variables are stored in NAND partitions. Usually these partitions are redundant: even if one is damaged, the second one should still allow the device to boot.

U-Boot's default environment variables, the location of the environment variable partition(s), the NAND partition layout etc. are compiled into the U-Boot binary.

In the simplest scenario, U-Boot loads the kernel and device tree from NAND partitions into memory, then starts the kernel.


FIT (Flattened Image Tree) is a single-file format supported by U-Boot that can store both the kernel and the device tree.

A FIT image can also be signed; U-Boot can then (optionally) refuse to run an image without a proper signature.
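A FIT image is described by a small source file (.its) and built with mkimage from u-boot-tools. The sketch below is a minimal, hypothetical example; the image names, load addresses, and the conf@1 configuration label are illustrative:

```shell
# Write a minimal FIT source file. The /incbin/ paths (zImage, board.dtb)
# are placeholders for your real kernel and device tree blob.
cat > /tmp/fit.its <<'EOF'
/dts-v1/;
/ {
    description = "Kernel + device tree";
    images {
        kernel@1 {
            data = /incbin/("zImage");
            type = "kernel";
            arch = "arm";
            os = "linux";
            compression = "none";
            load = <0x20008000>;
            entry = <0x20008000>;
        };
        fdt@1 {
            data = /incbin/("board.dtb");
            type = "flat_dt";
            arch = "arm";
            compression = "none";
        };
    };
    configurations {
        default = "conf@1";
        conf@1 {
            kernel = "kernel@1";
            fdt = "fdt@1";
        };
    };
};
EOF

# Build the single fitImage file (requires u-boot-tools):
# mkimage -f /tmp/fit.its fitImage
```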

Default Atmel SAMA5D3 NAND layout

Let’s analyze the default Atmel SAMA5D3 boot process.

Here is NAND layout:

|             Bootstrap                |
|              U-boot                  |
|  U-boot environment variables 1      |
|  U-boot environment variables 2      |
|            Device tree               |
|              Kernel                  |
|         UBI/UBIFS rootfs             |

Boot process

* Bootstrap initializes the hardware.
* Bootstrap loads and starts U-Boot.
* U-Boot starts:
    * It loads its environment variables from a NAND partition (or uses the defaults if no valid environment variable partition has been found).
    * It executes the commands in the /bootcmd/ environment variable, which load the device tree and kernel into RAM.
    * It starts the Linux kernel using the command-line parameters defined in the /bootargs/ environment variable.


My goal is to create redundant banks for the kernel and rootfs.

A bank is a pair of volumes: one for the kernel, the second for the rootfs.

There are two banks, and only one is in use at a time. The other one is used during the update process.

The update script writes the kernel/rootfs images into the free bank's volumes.

Then the U-Boot environment variables are modified, so that on the next reboot the bootloader will start Linux from the other bank.

Device configuration after modifications

This is how the NAND/UBI partitions and volumes are going to look:

|             Bootstrap                |
|              U-boot                  |
|  U-boot environment variables 1      |
|  U-boot environment variables 2      |
|               UBI                    |
| +----------------------------------+ |
| |      Kernel 1 (FIT)  (Bank 1)    | |
| +----------------------------------+ |
| |      Kernel 2 (FIT)  (Bank 2)    | |
| +----------------------------------+ |
| |     Rootfs 1 (UBIFS) (Bank 1)    | |
| +----------------------------------+ |
| |     Rootfs 2 (UBIFS) (Bank 2)    | |
| +----------------------------------+ |
| |            Data (UBIFS)          | |
| +----------------------------------+ |

Modifications required

An overview of the modifications required:


The NAND layout has to be changed.
The problem is that NAND partitions are defined in more than one place. All of those definitions have to be modified.


U-Boot requires full information about the NAND flash partition layout. It also has to know the UBI volume labels for the kernel and rootfs.

Additionally, U-Boot has to be configured to support FIT files.

All of this requires fairly serious U-Boot code modifications.

I suggest forking the U-Boot git repository and making the changes there.

Changing the git repository location requires altering the Yocto recipe's behavior by creating an append file (.bbappend).

UBI volumes

By default, a UBI image with a single volume is created.

The image creation process is implemented in the image_types.bbclass file. Unfortunately, modifying behavior defined in a .bbclass file is not as easy as with a recipe: recipes have a simple override mechanism (.bbappend files), classes do not. Luckily, there is a hack we can use to get the work done.

To create a UBI image containing five volumes, the original image_types.bbclass file has to be copied and changed.

The UBI image should contain five volumes. Each volume is created from a UBIFS image file or a raw image.


The kernel can remain unchanged, but Yocto has to be instructed to generate a FIT file containing both the kernel and the device tree.


The rootfs should contain additional software packages for:

  • UBI volume manipulation
  • NAND partition manipulation
  • U-Boot environment variable manipulation

These packages need to be added to the image.


The rootfs should be read-only, but we usually need some kind of read/write storage. That is why another UBI volume is needed: the DATA volume should contain a UBIFS file system and has to be mounted at boot time.


Bootstrap may be left unchanged.

Let’s get to work

Yocto configuration

Custom layer

To customize the way some Yocto recipes work, we need a custom layer.

Let’s call it meta-arek. Create a folder named ‚meta-arek’ next to the other layers in the yocto directory.

You should already know how to create the ‚build’ directory.

Inside the build/conf directory, find and open the bblayers.conf file. Add the path to the meta-arek layer directory to the BBLAYERS variable.

The path to meta-arek should be first on the list – I’ll explain why later.

Link to Yocto documentation:


UBI image and volumes

We need to change the image creation process. The code that creates images can be found in the meta/classes/image_types.bbclass file. Unfortunately, we can’t override the behavior of .bbclass files; that is why I’m using a little hack.

Copy the file to the meta-arek/classes directory. Because the ‚meta-arek’ layer is listed before ‚meta’, its files will be preferred by bitbake.

Now, let’s modify the meta-arek/classes/image_types.bbclass file:

multiubi_mkfs() {
    local mkubifs_args="$1"
    local ubinize_args="$2"
    if [ -z "$3" ]; then
        local vname=""
    else
        local vname="_$3"
    fi

    echo -n > ubinize${vname}.cfg

    echo \[kernel1\] >> ubinize${vname}.cfg
    echo mode=ubi >> ubinize${vname}.cfg
    echo image=${DEPLOY_DIR_IMAGE}/fitImage >> ubinize${vname}.cfg
    echo vol_id=1 >> ubinize${vname}.cfg
    echo vol_type=dynamic >> ubinize${vname}.cfg
    echo vol_name=kernel1 >> ubinize${vname}.cfg
    echo vol_size=10MiB >> ubinize${vname}.cfg

    echo \[kernel2\] >> ubinize${vname}.cfg
    echo mode=ubi >> ubinize${vname}.cfg
    echo image=${DEPLOY_DIR_IMAGE}/fitImage >> ubinize${vname}.cfg
    echo vol_id=2 >> ubinize${vname}.cfg
    echo vol_type=dynamic >> ubinize${vname}.cfg
    echo vol_name=kernel2 >> ubinize${vname}.cfg
    echo vol_size=10MiB >> ubinize${vname}.cfg

    echo \[root1\] >> ubinize${vname}.cfg
    echo mode=ubi >> ubinize${vname}.cfg
    echo image=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${vname}.rootfs.ubifs >> ubinize${vname}.cfg
    echo vol_id=3 >> ubinize${vname}.cfg
    echo vol_type=dynamic >> ubinize${vname}.cfg
    echo vol_name=root1 >> ubinize${vname}.cfg
    echo vol_size=30MiB >> ubinize${vname}.cfg

    echo \[root2\] >> ubinize${vname}.cfg
    echo mode=ubi >> ubinize${vname}.cfg
    echo image=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${vname}.rootfs.ubifs >> ubinize${vname}.cfg
    echo vol_id=4 >> ubinize${vname}.cfg
    echo vol_type=dynamic >> ubinize${vname}.cfg
    echo vol_name=root2 >> ubinize${vname}.cfg
    echo vol_size=30MiB >> ubinize${vname}.cfg

    echo \[data\] >> ubinize${vname}.cfg
    echo mode=ubi >> ubinize${vname}.cfg
    echo vol_id=5 >> ubinize${vname}.cfg
    echo image=${DEPLOY_DIR_IMAGE}/empty.ubifs >> ubinize${vname}.cfg
    echo vol_type=dynamic >> ubinize${vname}.cfg
    echo vol_name=data >> ubinize${vname}.cfg
    echo vol_size=30MiB >> ubinize${vname}.cfg
    echo vol_flags=autoresize >> ubinize${vname}.cfg

    rm -rf ${DEPLOY_DIR_IMAGE}/empty/*
    mkdir -p ${DEPLOY_DIR_IMAGE}/empty

    mkfs.ubifs -r ${DEPLOY_DIR_IMAGE}/empty -o ${DEPLOY_DIR_IMAGE}/empty.ubifs ${mkubifs_args}

    mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${vname}.rootfs.ubifs ${mkubifs_args}
    ubinize -o ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${vname}.rootfs.ubi ${ubinize_args} ubinize${vname}.cfg

    # Cleanup cfg file
    mv ubinize${vname}.cfg ${DEPLOY_DIR_IMAGE}/

    # Create own symlinks for 'named' volumes
    if [ -n "$vname" ]; then
        cd ${DEPLOY_DIR_IMAGE}
        if [ -e ${IMAGE_NAME}${vname}.rootfs.ubifs ]; then
            ln -sf ${IMAGE_NAME}${vname}.rootfs.ubifs \
            ${IMAGE_LINK_NAME}${vname}.ubifs
        fi
        if [ -e ${IMAGE_NAME}${vname}.rootfs.ubi ]; then
            ln -sf ${IMAGE_NAME}${vname}.rootfs.ubi \
            ${IMAGE_LINK_NAME}${vname}.ubi
        fi
        cd -
    fi
}

As you can see, we have changed the way the ubinize.cfg file is generated. The ubinize.cfg file is used by the ‚ubinize’ tool, which creates the UBI image. ubinize.cfg contains the definitions of the UBI volumes, plus some NAND-specific configuration values.
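For reference, here is what the generated ubinize.cfg looks like for the first volume (the remaining four sections follow the same pattern; the <DEPLOY_DIR_IMAGE> placeholder is expanded by bitbake):

```
[kernel1]
mode=ubi
image=<DEPLOY_DIR_IMAGE>/fitImage
vol_id=1
vol_type=dynamic
vol_name=kernel1
vol_size=10MiB
```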


I spent a couple of days searching for a simple way to modify U-Boot, and did not find any.
Atmel’s manual for U-Boot setup is terribly outdated, so I had to find a solution myself.

U-Boot configuration is probably the hardest part. We have to modify U-Boot source files.
That is why I recommend forking the U-Boot repository, making the changes in the forked version, and then changing the repository URI used by Bitbake to fetch the U-Boot source code.

Once you have forked the U-Boot repository, we can start work.

Open the arch/arm/mach-at91/Kconfig file and add a new target entry (the config name below follows U-Boot's naming convention):

config TARGET_SAMA5D3_AREK
    bool "SAMA5D3 Xplained board"
    select CPU_V7
    select SUPPORT_SPL

Then, at the end of the arch/arm/mach-at91/Kconfig file, add:

source "board/atmel/sama5d3_arek/Kconfig"

Create the file board/atmel/sama5d3_arek/Kconfig:


config SYS_BOARD
    default "sama5d3_xplained"

config SYS_VENDOR
    default "atmel"

config SYS_CONFIG_NAME
    default "sama5d3_arek"


Create the file board/atmel/sama5d3_arek/MAINTAINERS:

M:  Arek Marud <a.marud@post.pl>
S:  Maintained
F:  board/atmel/sama5d3_arek/
F:  include/configs/sama5d3_arek.h
F:  configs/sama5d3_arek_defconfig

Create the file configs/sama5d3_arek_defconfig:


Create the file include/configs/sama5d3_arek.h:

#include "sama5d3_xplained.h"
#define MTDIDS_DEFAULT "nand0=nand_flash"
#define MTDPARTS_DEFAULT "mtdparts=nand_flash:256k(bootstrap)ro,512k(uboot)ro,256k(env1),256k(env2),-(sys)"     
#define CONFIG_BOOTARGS "ubi.mtd=4 root=ubi0:root1 rootfstype=ubifs console=ttyS0,115200 earlyprintk mtdparts=atmel_nand:256k(bootstrap)ro,512k(uboot)ro,256k(env1),256k(env2),-(sys)"
#define CONFIG_BOOTCOMMAND "mtdparts default;ubi part sys;ubi read 0x22000000 kernel1;bootm 0x22000000#conf@1"

That’s it.

Please note the contents of ‚include/configs/sama5d3_arek.h’: MTDPARTS_DEFAULT describes the NAND partition layout for U-Boot. CONFIG_BOOTARGS defines the kernel command line parameters, including the NAND partition layout.
Finally, CONFIG_BOOTCOMMAND stores the commands used to start the kernel.

U-boot Yocto changes

To make our changes take effect, we have to change the git repository URI.

Create the file ‚meta-arek/recipes-bsp/u-boot/u-boot-at91_git.bbappend’.

Add a single line (a SRC_URI assignment):


Of course, you have to enter the URI of your own repository.
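As a sketch – the URI below is a placeholder, not a real repository – the .bbappend could look like this:

```
# Placeholder URI -- point this at your own U-Boot fork.
SRC_URI = "git://git.example.com/u-boot-at91.git;branch=master"
```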


The default settings for an image are stored in a machine file. Let’s create one.

First, we need to copy all the required include files.

Copy the files:


to the meta-arek/conf/machine/include directory.

The files can be found inside the meta-atmel directory.

Create the file meta-arek/conf/machine/mymachine.conf:

require include/sama5d3.inc                                                              

MACHINE_FEATURES = "kernel26 apm ext2 ext3 usbhost usbgadget camera ppp wifi iptables"   
# used by sysvinit_2                                                                     
SERIAL_CONSOLES ?= "115200;ttyS0"                                                        

ROOT_FLASH_SIZE = "256"                                                                  
IMAGE_FSTYPES += " ubi tar.gz"                                                           

# NAND                                                                                   
MKUBIFS_ARGS ?= " -e 0x1f000 -c 2048 -m 0x800  -x lzo"                                   
UBINIZE_ARGS ?= " -m 0x800 -p 0x20000 -s 2048"                                           

UBI_VOLNAME = "rootfs"                                                                   

UBOOT_MACHINE ?= "sama5d3_arek_config"                                               
UBOOT_ENTRYPOINT = "0x20008000"                                                          
UBOOT_LOADADDRESS = "0x20008000"                                                         

AT91BOOTSTRAP_MACHINE ?= "sama5d3_xplained"                                              

PREFERRED_PROVIDER_virtual/kernel = "linux-at91"                                         
PREFERRED_VERSION_linux-at91= "4.%"                                                      

KERNEL_CLASSES += "kernel-fitimage"

The KERNEL_CLASSES line forces FIT file creation. The FIT file will be used during UBI image creation.

Test build

bitbake core-image-minimal

The directory build/tmp/deploy/images/mymachine should be populated.


It is time to flash the files into the NAND flash memory.

To do that, Atmel’s SAM-BA flashing tool is required. Download and unpack SAM-BA, and find the directory containing the sam-ba_64 binary.

Create a flash.tcl file:

global target
puts "=== Initialize the NAND access ==="
NANDFLASH::Init
puts "=== Erase the NAND access ==="
NANDFLASH::EraseAllNandFlash
puts "=== Send SPL ==="
NANDFLASH::SendBootFileCmd "at91bootstrap.bin"
puts "=== Send u-boot.bin ==="
send_file {NandFlash} "u-boot.bin" 0x40000 0
puts "=== Send rootfs ==="
send_file {NandFlash} "rootfs.ubi" 0x00140000 0
puts "=== DONE ==="

Create a flash.sh file:

#!/bin/sh
./sam-ba_64 /dev/ttyACM0 at91sama5d3x-xplained flash.tcl

Make flash.sh executable:

chmod u+x flash.sh

Connect your SAMA5-based board to a USB port.

Remember to enable the NAND access mode (the JP5 jumper on the SAMA5D3 Xplained).

Power the device with the jumper connected. Disconnect the jumper after 3–4 seconds and start the ‚flash.sh’ script. Wait for the flashing process to finish.

Restart your device. Linux should boot.

Linux modifications

Now our system has to be able to update itself. That means Linux needs access to:

* U-Boot environment variables. These variables have to be changed to „switch” the active kernel and rootfs.
* UBI volumes. The kernel and rootfs images will be copied into them.

Let’s start with the U-Boot environment variables.

First, let’s add the „u-boot-fw-utils” package to the rootfs. The package contains utilities that allow modifying the U-Boot environment variables.


IMAGE_INSTALL_append = " u-boot-fw-utils"

The „u-boot-fw-utils” package is built from the main U-Boot source repository. Because we moved the U-Boot repository to a different server, it is recommended to do the same for the „u-boot-fw-utils” package.

To change the git repository address, create the file u-boot-fw-utils_2015.07.bbappend in the directory meta-arek/recipes-bsp/u-boot:

SRCREV = "<git commit hash>"
LIC_FILES_CHKSUM = "file://Licenses/README;md5=a2c678cfd4a4d97135585cad908541c6"


Unfortunately, the u-boot-fw-utils_2015.07.bb recipe uses the SRCREV parameter. I did not find a way to „nullify” SRCREV in a bbappend file, so the value has to be changed each time a significant commit is pushed to the git repository.

What SRCREV does is force the use of the specified commit instead of the latest one.

SRC_URI should be the same as in the u-boot-at91_git.bbappend file.

OK. Now we have the tools to manipulate U-Boot environment variables installed; we just need to configure them.

The „u-boot-fw-utils” programs require information about the placement of the U-Boot environment variables. The environment variables are stored in MTD partitions. Because we added the MTD partition layout to the kernel command line, the kernel should create a device file for each partition, named /dev/mtd1, /dev/mtd2 and so on.

The configuration for u-boot-fw-utils is stored in the /etc/fw_env.config file.

Here is a valid configuration for our MTD layout:

/dev/mtd2   0x0   0x20000 0x20000 1
/dev/mtd3   0x0   0x20000 0x20000 1

The default content of the /etc/fw_env.config file is stored in the U-Boot repository, and it has to be changed.
The easiest way to do it is to modify the U-Boot repository: find the default file and modify it.

Because the repository contents changed, SRCREV in the u-boot-fw-utils_2015.07.bbappend file has to be updated again. Change the SRCREV value to the new commit hash.

UBI volumes manipulation

One of Linux's greatest strengths is the „everything is a file” philosophy. Each hard disk, and each partition on that disk, has its own device file.

That rule also applies to UBI volumes – where the kernel and rootfs are stored.

Essentially, during the update process the rootfs and kernel image files are copied into UBI volume device files.
But UBI volumes are not ordinary Linux block devices, so the ‚dd’ command cannot be used. We need a dedicated tool for this task.

The „mtd-utils-ubifs” package contains everything we need.

To add „mtd-utils-ubifs” to the image, add:

IMAGE_INSTALL_append = " mtd-utils-ubifs"

to local.conf.

Now the ubiupdatevol command can be used. To copy a kernel image file to the /dev/ubi0_1 UBI volume device file, try:

ubiupdatevol /dev/ubi0_1 kernel

Now we have all the tools we need to update Linux.

There are, however, a few scripts that need to be created.

Create the update package file

The easiest way to do that is to create a .tar.gz file containing the rootfs and kernel files.

Unpack the update package

I think the best location is somewhere in the /tmp directory.

Determine the currently used UBI volumes

This can be done by parsing the U-Boot environment variables, or the kernel command line parameters contained in /proc/cmdline.
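A minimal sketch of the /proc/cmdline approach (the helper name active_root is mine; on the device you would pass it "$(cat /proc/cmdline)"):

```shell
# Extract the active rootfs volume name (e.g. "root1") from a kernel
# command line containing root=ubi0:<volume>.
active_root() {
    printf '%s\n' "$1" | sed -n 's/.*root=ubi0:\([^ ]*\).*/\1/p'
}

# Sample command line, as set by CONFIG_BOOTARGS earlier in this article:
CMDLINE="ubi.mtd=4 root=ubi0:root1 rootfstype=ubifs console=ttyS0,115200"
active_root "$CMDLINE"   # prints: root1
```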

Overwrite the UBI volumes

Using ubiupdatevol, copy the contents of the kernel and rootfs files into the free bank's UBI volumes.

Modify bootloader

Use the fw_setenv command to modify the bootloader environment variables.

The „bootcmd” variable contains the kernel location.
The „bootargs” variable contains the rootfs location.

Unfortunately, we need to change two variables. That means the „switch” will not be an atomic operation.

Mount „data” volume

The rootfs contents should not be changed. The best solution would be a read-only rootfs; I tried that, but the number of problems that need to be fixed is huge.
The system settings directory – /etc – can be mounted using unionfs over a tmpfs volume, but some services (dropbear, for example) change files in other directories.

Anyway, all changes made to the rootfs will be lost after an update. That is why we need a place where modified files (configuration data, for example) can be stored. The UBI volume named ‚data’ exists for that purpose.

We can:
* Mount the ‚data’ volume in the /mnt/data directory.
* Mount unionfs over the /etc and /mnt/data/etc directories to merge them. All modifications made to /etc will then be stored in the ‚data’ volume, which means all settings will survive the update process.

Unfortunately, /etc/fstab can’t be used to mount a UBI volume. To mount the volume, create a script that is executed at boot time and mounts the volume ‚by hand’ using the ‚mount’ command.
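A sketch of such a boot-time script (the script path and the /mnt/data mount point are my choices; the kernel has already attached the UBI device thanks to ubi.mtd= in bootargs). Here the script body is written to a temporary file so you can inspect it:

```shell
# Generate the boot-time mount script. On the target it would live in
# e.g. /etc/init.d/ and be linked into the boot sequence.
cat > /tmp/mount-data.sh <<'EOF'
#!/bin/sh
# Mount the 'data' UBI volume (attached as ubi0 via ubi.mtd= in bootargs).
mkdir -p /mnt/data
mount -t ubifs ubi0:data /mnt/data
EOF
chmod +x /tmp/mount-data.sh
```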

Firmware update file

The easiest way to make a firmware update package is to place files named rootfs and kernel inside a tar.gz file.

The archive can be extracted into the /tmp directory (/tmp in a Yocto minimal image is mounted using tmpfs; it is basically a ramdisk).

I suggest signing the package using an OpenSSL private key. Add the public key to your rootfs, and verify the update file before performing any modifications.
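A sketch of the packaging and signing flow (file and key names are examples; the dummy kernel/rootfs files here stand in for the real images):

```shell
# Build the update package from stand-in files.
mkdir -p /tmp/pkg && cd /tmp/pkg
echo kernel-image > kernel
echo rootfs-image > rootfs
tar czf update.tar.gz kernel rootfs

# One-time: create a key pair. Ship only pub.pem inside the rootfs.
openssl genpkey -algorithm RSA -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem

# Sign on the build machine...
openssl dgst -sha256 -sign priv.pem -out update.sig update.tar.gz
# ...verify on the device before touching any UBI volume.
openssl dgst -sha256 -verify pub.pem -signature update.sig update.tar.gz
# prints: Verified OK
```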


At least one script is required to perform the update process.

The script should:

* Verify the package file against the public key (optional)
* Extract the update package archive
* Check that the rootfs and kernel files exist
* Check which UBI volumes are currently in use (read the U-Boot environment variables using the fw_printenv command)
* Use ubiupdatevol to overwrite the free bank's UBI volumes
* Use fw_setenv to modify the U-Boot environment variables to point at the new kernel and rootfs locations
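Putting the steps above together, here is a sketch of the bank-switch logic. The helper names and /tmp/update paths are my own, the fw_setenv values are abbreviated, and the device-only commands (fw_printenv, ubiupdatevol, fw_setenv) are wrapped in a function so nothing touches hardware unless do_update is called. Volume numbering follows the layout above: vol_id 1/2 = kernel1/kernel2, vol_id 3/4 = root1/root2.

```shell
# Given the currently booted rootfs volume, print the number of the free bank.
next_bank() {
    case "$1" in
        root1) echo 2 ;;
        root2) echo 1 ;;
    esac
}

# Device-only: overwrite the free bank, then switch U-Boot to it.
do_update() {
    current=$(fw_printenv -n bootargs | sed -n 's/.*root=ubi0:\([^ ]*\).*/\1/p')
    bank=$(next_bank "$current")
    ubiupdatevol "/dev/ubi0_${bank}" /tmp/update/kernel            # kernel volume of free bank
    ubiupdatevol "/dev/ubi0_$((bank + 2))" /tmp/update/rootfs      # rootfs volume of free bank
    fw_setenv bootcmd "mtdparts default;ubi part sys;ubi read 0x22000000 kernel${bank};bootm 0x22000000#conf@1"
    fw_setenv bootargs "ubi.mtd=4 root=ubi0:root${bank} rootfstype=ubifs console=ttyS0,115200"
}

next_bank root1   # prints: 2
```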

Yocto ERROR: QA issue : package XXXX rdepends on XXXX-dev

I have been working on an embedded Linux application for months. Usually I use QT Creator’s „deploy” feature to run tests or start the whole application on a remote device.

Sometimes I rebuild the whole Yocto image, just to be sure that everything is all right.

A couple of weeks ago I decided to divide my application into a number of shared object (.so) files.

My QtCreator deploy workflow worked perfectly, but when I tried to build the whole image, bitbake returned a weird error:

ERROR: QA issue : package XXXX rdepends on XXXX-dev

I had no idea about any „-dev” package. After some googling, I found a quick and dirty hack.

Add to the recipe:

INSANE_SKIP_${PN} += "dev-so"

Build image:

bitbake core-image-minimal

And a working image is ready.

But the image size was much larger than I expected.

After some investigation, it turned out that Yocto had decided to add all the kernel header files to the image. That’s bad: every megabyte is precious on embedded systems.

So I had to find a better solution.

After digging through the documentation, I finally understood what the problem was.

Yocto (or OpenEmbedded) checks the package's binary files. When .so file(s) are detected, it checks for symbolic links to those files. Symlinks are used to deal with different .so versions.

Check this article for details: http://www.tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html

When the symlinks are not found, this weird and unhelpful QA error is generated.

Fixing the problem is trivial.

I’m using CMake. All I have to do is add this line to my library’s CMakeLists.txt file:

set_target_properties(libname PROPERTIES VERSION 1.0 SOVERSION 1)

That’s it. CMake will generate the symlinks.

QuickTip – QTCreator – debugging signed android application

While working on an Android application using QTCreator, I got a strange debugger error.

The application was compiled, signed, and properly deployed on my test Android phone, but the debugger did not want to start.

The logs showed these lines repeated:

I/Qt JAVA (23201): DEBUGGER: go to sleep
I/Qt JAVA (23201): DEBUGGER: Waiting for debug socket connect

Google wasn’t helpful, but after long hours of testing I found a solution.

It looks like QTCreator handles Android’s manifest „debuggable” flag by itself, but the flag is not generated when the application is signed.

If you want to debug a signed application, add the android:debuggable="true" attribute to the manifest’s <application> tag.

OpenGL varying variables explained

A couple of months ago I posted an article about circle rendering using OpenGL ES 2.

I have to admit that back then I did not fully understand varying variables.

Now I have a much better understanding.

OK, let’s analyze a simple vertex shader:

uniform mat4 mvp_matrix;
uniform vec4 u_color;
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
    gl_Position = mvp_matrix * a_position;
    v_texcoord = a_texcoord;
}

First, we have two uniform variables. Uniforms are set at runtime by the main application.

Second, two variables are marked as attribute.

The values of attribute variables are fetched from a buffer. The buffer has to be filled with vertex/texture coordinates, and OpenGL has to be informed about the buffer's data format.

Finally, there is a single variable marked varying.

A couple of lines later, the varying variable is assigned a value – a copy of the value stored in the a_texcoord attribute.

Now, let’s look at the fragment shader:

uniform sampler2D texture;

varying vec2 v_texcoord;

void main()
{
  gl_FragColor = texture2D(texture, v_texcoord);
}

gl_FragColor is a built-in OpenGL variable. The fragment color is determined by the value assigned to that variable.

The texture2D(texture, v_texcoord) function calculates a color value using the current texture and the texture coordinates.

But how are the texture coordinates calculated? The only data we have are the UVs stored in the buffer alongside the vertex coordinates.

First, remember how shaders work. To render a triangle:

  • Vertex shader is executed for each vertex
  • Fragment shader is executed for each pixel

So the varying variable has a value assigned three times (once for each vertex shader invocation).

Now, OpenGL calculates the varying variable's value for each fragment shader invocation. Obviously, the value has to be different for each call.

The calculation takes the values of the varying variable produced by the vertex shader, and the relative position of the fragment.

The point is: the value of a fragment shader's varying variable is interpolated, using the fragment's (relative) position within the triangle and the varying values produced by the vertex shader.
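For completeness, the interpolation (ignoring perspective correction) is barycentric: if the three vertex shader invocations produced values $v_1, v_2, v_3$ and the fragment has barycentric coordinates $\lambda_1, \lambda_2, \lambda_3$ inside the triangle, then:

```latex
v = \lambda_1 v_1 + \lambda_2 v_2 + \lambda_3 v_3,
\qquad \lambda_1 + \lambda_2 + \lambda_3 = 1, \quad \lambda_i \ge 0
```

OpenGL actually does this in a perspective-correct way (dividing by the clip-space $w$ of each vertex before interpolating), but the idea is the same: a weighted average of the three per-vertex values.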

Yocto kernel customization

Playing with Yocto projects, there is a fair chance that you will need to configure the Linux kernel. In my project, I had to add the overlay file system to the embedded Linux kernel.

Let’s begin:

bitbake virtual/kernel -c menuconfig

The kernel configuration menu appears. Navigate to File systems, enter it, select Overlayfs and press y. Save the changes and exit the menu.

Note that menuconfig tries to allocate a new terminal; if you are using ssh, for example, this could fail.

Now, build image:

bitbake core-image-minimal

Trivial right?


There is a fair chance that the generated kernel does not contain the modifications.

The kernel configuration is saved in the .config file. Unfortunately, bitbake is unable to detect the file change, so it does not start kernel recompilation.

Let’s force kernel recompilation, then:

bitbake virtual/kernel -c compile -f

and now:

bitbake core-image-minimal

Now the kernel should contain the changes.

Unfortunately, the .config file gets deleted quite often, for example by:

  • bitbake virtual/kernel -c clean
  • bitbake virtual/kernel -c cleanall (which also removes the kernel sources)
  • possibly also by downloading a new version of the kernel (I am not sure)

So be prepared, and check your kernel configuration often.

You may want to add some features as kernel modules, not compiled directly into the kernel.

Again, start the configuration menu:

bitbake virtual/kernel -c menuconfig

Inside the kernel configuration menu, press m on the overlayfs entry instead of y. Save, close, and:

bitbake core-image-minimal 

Now the module is presumably compiled, but not present in the image. That is because none of the recipes includes the kernel module. One of the recipes (your image recipe, for example) should contain the following lines:

RDEPENDS_${PN} += " kernel-modules"
RDEPENDS_${PN} += " kernel-module-overlay"

And then:

bitbake virtual/kernel -c compile -f
bitbake core-image-minimal

Search the subdirectories of the tmp directory: find the rootfs directory – its lib directory should contain a modules directory.

The good thing about the „module” way is that whenever the .config file gets overwritten and the module is not built, bitbake will complain about the kernel-module-overlay line.

Story of a simple web service

A couple of years ago, I developed a web service. The service contains an embedded web server and uses MongoDB as its data source. All the back-end code is C++. Everything worked perfectly.

Some time later

Two years passed. My service was used internally by my co-workers, but new functionality had to be added. So I looked at the old code, trying to estimate the work needed. In the meantime, a new MongoDB driver showed up. The driver uses C++11 capabilities and is much, much better compared to the old one.

Dependency hell

The machine on which my service runs uses a pretty old Debian distribution. I could update to a newer one, but unfortunately the new Debian does not contain the C++11 libraries for MongoDB.

My laptop runs the newest Ubuntu distribution – same story.

There are more problems: the „old” Mongo driver uses Boost – an old version of Boost. I could probably recompile the driver with the newest Boost libraries. Probably.

To rewrite or not to rewrite

So. I had two choices:

  • Install the old Debian on a virtual machine (VirtualBox) and add the required functionality there, sinking deeper and deeper into technological debt.
  • Rewrite the service for the newest MongoDB driver.

I decided to try the second option. What I didn't like was the prospect of yet another SSH session of tweaking the server machine, and reinstalls with every major library update.


While searching through Google, I found a mention of Docker. It could be the solution to my problems. So I started researching.

I love it.

Not a virtual machine

Docker is often described as a lightweight virtual machine. I think this description is imprecise and misleading. Docker (among other things) isolates processes. A process runs in a sandbox; any interaction with the host, or with other Docker-hosted processes, has to be defined when the container is first run.


An image is a snapshot of a file system. It contains the complete collection of files needed to run a concrete process inside. For example, an image for a MongoDB database contains a very thin but complete Linux distribution, plus all the MongoDB files required to run the database server. An image is a read-only template – you can't run an image. Docker transforms an image into a container.

One of the cool features of Docker is its layered structure. Let's say I want to build an image containing a MongoDB database server. First I have to choose a base Linux distribution – Ubuntu 14.04, for example.

My image will consist of two layers:

  • First – the base Ubuntu layer
  • Second – the MongoDB server layer. This layer will contain only the files CHANGED by the MongoDB installation process, so it will be quite small

Now I can create another image – let's say a PostgreSQL server. As the base image I'll again use Ubuntu 14.04. The base Ubuntu image will be reused by both the MongoDB and PostgreSQL images. Pretty neat, right?


An image is a read-only template. A container is a writable instance of an image. Image is to container as class is to object. Docker can run many containers based on a single image.

A container can be transformed back into an image, and the image created this way can be the base for another container. And so on.

A container has its own file system, which is (surprise, surprise) a layer over the image's file system.

A container is created from an image using the ‚docker run' command.

Single process. Usually

A Docker image contains a command which will be called after the container starts. Usually the command starts a single process, but nothing stands in the way of running a script which starts multiple processes.

When the command returns, Docker stops the container.

Tricky DNS

By default, the following container files:

/etc/hostname
/etc/hosts
/etc/resolv.conf

are overlaid by the relevant files of the host system. Don't be surprised when changes made to those files disappear.


A container can be transformed into an image using the ‚docker commit' command, but usually images are created from a Dockerfile.

A Dockerfile is a recipe: it contains rules and commands executed on a base image, leading to the destination image.

Images are built using the

docker build

command. The created image is placed in local Docker storage.

Order matters!

Commands are processed in the same order as they appear in the Dockerfile.


FROM ubuntu:latest

This line informs Docker that the image is based on the latest release of Ubuntu. If the image is not found locally, Docker will search for it on Docker Hub and download it.


RUN apt-get -y update

Runs a command on the image.



EXPOSE 27017

Informs Docker that the image exposes a TCP port (27017 here is just an example – MongoDB's default). The host can redirect the port to another one. The way the host treats exposed ports is defined by the ‚docker run' command.


Copies local files into the image.

COPY ./file_layer /
├── usr
│   ├── bin
│   └── local
└── var
    └── lib

Inside the file_layer directory I can emulate the root directory layout. The files are copied into the image, possibly replacing the image's original ones.


WORKDIR /var/lib

This command sets the working directory for the image's initial command.


The entry command for the container:

ENTRYPOINT /bin/bash

Usually the command starts the Docker-hosted process – MongoDB, for example. Before running the command, Docker sets the working directory (WORKDIR).
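Putting the directives together, a complete Dockerfile in the spirit of this article might look like the sketch below (the paths, port, and service name are hypothetical):

```
# Hypothetical Dockerfile combining the directives described above
FROM ubuntu:14.04

# Bring package lists up to date
RUN apt-get -y update

# Copy the emulated root directory (extra libraries, the service binary)
COPY ./file_layer /

# Rebuild the shared-library cache after copying libraries
RUN ldconfig

# The TCP port the service listens on (hypothetical)
EXPOSE 8080

# Working directory for the entry command
WORKDIR /var/lib

# Start the service when the container starts
ENTRYPOINT /usr/bin/myservice
```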

Story continues

Back to the story of my web service.

I updated the kernel on my server and installed Docker.

I also installed Docker on my development laptop.

Now I can create and test images on my development machine. When ready, an image is moved to the production machine.

MongoDB libraries

Because neither Debian nor Ubuntu contains the C++ drivers for MongoDB, I'm using the COPY command to copy them from my development laptop into the image.


Copying the libraries into the file structure is not enough. The library cache has to be rebuilt using the ldconfig command:

RUN ldconfig


My service uses LDAP authentication, so the proper packages have to be installed:

RUN apt-get -y install libldap-2.4-2

Service start

ENTRYPOINT /usr/bin/myservice


So I have my image created. Now I need to transport it to the destination machine. There are a couple of ways to do it.

  • An image can be saved as a tar archive using the docker save command and loaded back using the docker load command (docker export/docker import are the container-level equivalents).
  • An image can be pushed to Docker Hub.
  • An image can be pushed to a private registry.

I like this trick:

docker save <image> | bzip2 | pv | ssh user@host 'bunzip2 | docker load'

Run it!

An image is transformed into a container using the docker run command.

docker run <image name>

The docker run command is very important: it defines the behavior of the container. A single image can be run many times, each time transformed into a different container. For example, with Docker there can be 10 instances of MongoDB running concurrently on a single host machine, but each instance requires different host resources. The TCP port number, for example, has to be different.

This fine-tuning of the image is done by the run command. I suggest consulting the Docker documentation.
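For example, the port remapping mentioned above might look like this (the container names are mine; the official mongo image from Docker Hub is assumed):

```
# Two containers from the same image; each maps MongoDB's internal
# port 27017 to a different port on the host.
docker run -d --name mongo-a -p 27017:27017 mongo
docker run -d --name mongo-b -p 27018:27017 mongo
```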

Happy end

My service was deployed and is now running. I have built an automated system for building and deploying images. The Linux distribution on my server no longer matters – I can use any Linux distribution inside my images.

Runtime texture atlas generation

Texture atlas – what is it

A texture atlas is an image containing a collection of sub-images, each representing a texture.

An application can use multiple texture atlases.


Rendering a texture requires setting it as active, and this operation is time consuming. Since an atlas is a single texture containing many different sub-textures, active-texture switches become much rarer: the renderer uses a region of the texture atlas to render a concrete texture.

Reducing OpenGL state changes can give significant performance gains.

Runtime texture atlas generation

A texture atlas can be pre-generated; there are many tools to do it. Usually the tool also generates a metadata file containing the coordinates of each sub-texture inside the atlas.

A texture atlas can also be generated at runtime: the images representing textures are copied into a single large in-memory image.

Pros of runtime generation

Simplified development process

Images can be added to the project (or resource file) immediately. With a pre-generated texture atlas, an image has to be merged into the atlas each time an artist submits one.

Image size

Texture sizes should be powers of two. If they are not, the rendering process may suffer a performance hit, and some OpenGL implementations will not load the image at all. An image placed in a texture atlas can be any size (smaller than the atlas size).


There are hundreds of different devices, differing in screen resolution, maximum texture size, and so on. Building the texture atlas at runtime gives an opportunity to optimize textures: on low-end devices that cannot use high-resolution textures, images can be down-sized, so the textures are better suited to the particular machine.

The size of the texture atlas itself can also be adapted to the machine. The OpenGL function glGetIntegerv(GL_MAX_TEXTURE_SIZE …) returns the maximum texture size; the value can be used to set the atlas dimensions.


Creation takes time

Obviously, all images merged into the atlas have to be loaded into memory and drawn into the single large image. Any additional image resizing takes extra time, too.

Implementation description

The runtime texture atlas generation process consists of two phases:

  • Calculating the placement of each sub-texture.
  • „Painting” the sub-textures onto the atlas texture.

In this post, I will describe only the first phase.


The atlas texture must have fixed dimensions. These values can be based on the machine's GL_MAX_TEXTURE_SIZE. The safest choice is 1024×1024 – all devices should support it.

The canvas on which sub-textures are going to be placed is represented by a rectangle:

{ x = 0, y = 0, width = 1024, height = 1024 }

Let's define a list (a collection, or whatever) which will contain the rectangles of free space on the canvas. Initially the collection contains a single rectangle: the one describing the whole canvas.

Divide and conquer

The canvas is just a big rectangle. An image which is going to be added to the atlas is a smaller rectangle. Subtracting the smaller from the bigger gives a surface that can be described as two rectangles:
A sub-texture (green background) was added. The space left is described by the blue rectangle and the red rectangle.

Now the collection of the canvas's free space has to be changed: the rectangle representing the whole canvas has to be removed, and two new rectangles have to be added.

The sub-texture coordinates have to be stored somewhere.

Let's add another sub-texture (black background).


Another two rectangles representing free space were created (orange and white backgrounds).

Horizontal or vertical

The surface left after subtraction can be divided in two ways: horizontally or vertically.


Choose the one better suited to your textures.


As you may have figured out, the algorithm is quite simple:

  • Get your sub-texture's dimensions.
  • Find a free rectangle that fits the sub-texture's dimensions.
  • Subtract the sub-texture's dimensions from the found rectangle. The result of the subtraction is two rectangles.
  • Store the sub-texture's coordinates (a rectangle) in a separate collection.
  • Remove the found rectangle from the list.
  • Add the two new rectangles (the result of the subtraction) to the list.
  • If an image does not fit into the texture atlas, generate a new texture atlas.
  • Repeat for each sub-texture.


The whole process of finding a fitting rectangle can be optimized by keeping the list sorted.


Leave at least 1 pixel of space between sub-textures. OpenGL does not work with pixels but with normalized values, which means some inaccuracies may creep in and „leaks” between neighboring textures can become visible.


The result of the atlas generation process should be a list of rectangles – the coordinates of each sub-texture. This list has to be transformed into a list of OpenGL texture coordinates.




  • Coordinate systems can differ. OpenGL's {0,0} point lies in the bottom-left corner; Qt, for example, uses the top-left.
  • A sub-texture can itself be an atlas (Inception?) – for example, a texture containing font glyphs. Its sub-sub-texture coordinates have to be recalculated.