Using Puppet’s policy-based autosigning

Handling SSL certificates is not a lot of fun, and while Puppet's use of client certificates protects the server and all its deep, dark secrets very well from rogue clients, it also leads to a lot of frustration. In many cases, users would configure their autosign.conf to allow any (or almost any) client's certificate to be signed automatically, which isn't exactly great for security. Since Puppet 3.4.0, it is possible to use policy-based autosigning to exercise much more control over autosigning, and to do so in a much more secure manner than the old autosigning based solely on clients' hostnames.

One of the uses for this is automatically providing certificates to instances in EC2. Chris Barker wrote a nice module, based on a gist by Jeremy Bouse, that uses policy-based autosigning to provide EC2 instances with certificates based on their instance_id.

I recently got curious, and wanted to use that same mechanism but with preshared keys. Here’s a quick step-by-step guide of what I had to do:

The autosign script

When you set autosign in puppet.conf to point at a script, Puppet will call that script every time a client requests a certificate, with the client's certname as the sole command-line argument and the CSR on stdin. If the script exits successfully, Puppet signs the certificate; otherwise it refuses to.

On the master, we’ll maintain a directory /etc/puppet/autosign/psk; files in that directory must have the certname of the client and contain the preshared key.

Here is the autosign-psk script; the OIDs for Puppet-specific certificate extensions can be found in Puppet's documentation on CSR attributes:

#! /bin/bash

PSK_DIR=/etc/puppet/autosign/psk

csr=$(< /dev/stdin)
certname=$1

# Get the certificate extension with OID $1 from the csr
function extension {
  echo "$csr" | openssl req -noout -text | fgrep -A1 "$1" | tail -n 1 \
      | sed -e 's/^ *//;s/ *$//'
}

psk=$(extension '1.3.6.1.4.1.34380.1.1.4')

# Refuse empty PSKs outright; an empty pattern would match any file below
if [ -z "$psk" ]; then
    echo "CSR for '$certname' does not contain a preshared key extension"
    exit 1
fi

echo "autosign $certname with PSK $psk"

psk_file=$PSK_DIR/$certname
if [ -f "$psk_file" ]; then
    # -x: the PSK must match a whole line, not just a substring
    if grep -qx "$psk" "$psk_file"; then
        exit 0
    else
        echo "File for '$certname' does not contain '$psk'"
        exit 1
    fi
else
    echo "Could not find PSK file for $certname"
    exit 1
fi

Puppet master setup

On the Puppet master, we put the above script into /usr/local/bin/autosign-psk, make it world-executable, and point autosign at it:

cp somewhere/autosign-psk /usr/local/bin
chmod a+x /usr/local/bin/autosign-psk
mkdir -p /etc/puppet/autosign/psk
puppet config set --section master autosign /usr/local/bin/autosign-psk

A PSK for a client with certname $certname can easily be generated with:

tr -cd 'a-f0-9' < /dev/urandom | head -c 32 >/etc/puppet/autosign/psk/$certname
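
With a PSK file in place, you can exercise the script by hand the same way Puppet will call it: the certname as the sole argument, the CSR on stdin, and the exit status as the verdict. A quick sketch, where agent01.example.com and its saved CSR file agent01.csr are hypothetical stand-ins:

/usr/local/bin/autosign-psk agent01.example.com < agent01.csr
echo "exit status: $?"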

Puppet agent setup

On the agent, we create the file /etc/puppet/csr_attributes.yaml with the PSK in it:

---
extension_requests:
  pp_preshared_key: @the_psk@

With all that in place, we can now run the Puppet agent and have it get its certificate automatically; the process is only as secure as we keep the preshared key.
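
On the agent, nothing special is needed beyond an ordinary run; if the PSK in the CSR matches the file on the master, the certificate is signed immediately:

puppet agent --test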

Publishing reveal.js presentations using OpenShift

There are numerous reasons why I love creating my slides as a simple HTML page. For example, I don't have to collect the emails of people who ask me to send them the slides, and I don't have to worry that the format won't be recognized or that the presentation will look terrible on someone else's machine.
The other reason is that I simply hate all presentation software, including Apple Keynote, LibreOffice Impress, and the one from Microsoft. I'm a programmer, and as such I prefer a simple and efficient approach; I hate touching the mouse and fighting with the alignment of my images or the size of my fonts.
Personally, I think slides should be simple and should not be the core of a presentation; the story should be told by the presenter, not by the slides.

There are many good HTML frameworks for making slides, such as deck.js, Slippy, and impress.js, all heavily based on HTML5, CSS3, and JavaScript. To be honest, I haven't tried them all, but somehow I ended up using reveal.js. For me, this framework is very simple, follows the new HTML5 semantics, does not require any initial setup, and the result looks awesome.
To make your presentation in reveal.js, you don't need to be an HTML5 or JavaScript wizard. If you know at least basic HTML, you should be fine. You can start with the index.html file they have in their GitHub repository.

For now, you can fast-forward to the <body> section of that file and remove everything inside <div class="slides">. Each slide is represented as an HTML5 <section> element, where you put the slide content. See the examples in the original index.html for how to make your slides awesome; there are many, from basic text to complex code highlighting. The good thing about reveal.js is that it will make your slides look good at whatever resolution or screen size, even on mobile browsers.
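
To give you a picture of what that leaves you with, here is a minimal two-slide skeleton (the content is hypothetical; the reveal and slides wrapper divs come from the stock index.html):

<div class="reveal">
  <div class="slides">
    <section>
      <h1>My talk</h1>
    </section>
    <section>
      <h2>One idea per slide</h2>
      <p>Keep it simple.</p>
    </section>
  </div>
</div>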

Now, this blog post is not about teaching you reveal.js; it is more about how you can publish your final presentation on OpenShift. If you sign up, you get three free applications, which is enough for storing three different presentations. You can also use one application to store multiple reveal.js presentations, as they are just simple HTML pages.

So to start, just clone the reveal.js GitHub repository, which includes all required libraries and assets:

git clone https://github.com/hakimel/reveal.js

Now, if you follow the Full setup instructions, you install Node.js and Grunt, which you can use to serve your presentation locally. We don't want to run Grunt or Node.js on OpenShift just to serve a static presentation, though. For that case, reveal.js comes with a simple Grunt task:

grunt zip

This will produce an archive with just enough JavaScript and CSS files and, of course, your index.html file. To put it online, you need to create a new application in OpenShift:

rhc app create mypreso php-5.3

I used a PHP application because it is very small, and the git push command is super fast, as it does not need to pull any dependencies or compile assets. You can use whatever application type you want, since we will just serve static HTML files. The next step is to unzip your presentation into the OpenShift application folder and make it available online:

unzip reveal-js-presentation.zip -d ./mypreso/php/
rm -f ./mypreso/php/index.php
cd mypreso && git add -A && git commit -m "My preso" && git push

When the last command has finished, your presentation should be available at http://mypreso-YOURDOMAIN.rhcloud.com. OpenShift supports custom domains, so you can easily set up your own domain name for the place where you store your presentations, for example, preso.mfojtik.im.
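
The rhc client can attach the alias for you; a sketch, assuming mypreso is your application and you have already pointed the DNS record at OpenShift:

rhc alias add mypreso preso.mfojtik.im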

Developing STM32 microcontroller code on Linux (Part 7 of 8, building and running a simple STM32 program)

The first post of this series gave an overview of the steps needed to build and run code for the STM32. The second post covered how to build a cross-compiler for the STM32. The third post covered how to build a debugger for the STM32. The fourth post covered building and configuring OpenOCD for your development environment. The fifth post covered building the device library, libopencm3. The sixth post covered linker scripts and command-line options necessary for building and linking programs to run on the STM32. This post will cover building and running a program on the STM32.

In the previous posts we dealt with all of the setup necessary to build programs for the STM32. It is finally time to take advantage of all of those tools to build and run something. Recall from the previous posts that we already have an OpenOCD configuration file, a linker script, and a Makefile set up. All that really remains is for us to write the code, build it, and flash it to our device. The code below is very STM32F3DISCOVERY specific; that is, it very much requires that the GPIO for the LED be on GPIO bank E, pin 12 on the board. If you have one of the other STM32 DISCOVERY boards, you'll need to look at the schematics and find one of the GPIOs that is hooked to an LED.

We are going to take an extremely simple example from libopencm3. This example does nothing more than blink one of the LEDs on the board on and off continuously. While this is simple, it will validate that everything that we've done before is actually correct.

Here is the code:

$ cd ~/stm32-project
$ cat <<EOF > tut.c
#include <libopencm3/stm32/rcc.h>
#include <libopencm3/stm32/gpio.h>

static void gpio_setup(void)
{
    /* Enable GPIOE clock. */
    rcc_peripheral_enable_clock(&RCC_AHBENR, RCC_AHBENR_IOPEEN);

    /* Set GPIO12 (in GPIO port E) to 'output push-pull'. */
    gpio_mode_setup(GPIOE, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE, GPIO12);
}

int main(void)
{
    int i;

    gpio_setup();

    /* Blink the LED (PE12) on the board. */
    while (1) {
        /* Using API function gpio_toggle(): */
        gpio_toggle(GPIOE, GPIO12); /* LED on/off */
        for (i = 0; i < 2000000; i++) /* Wait a bit. */
            __asm__("nop");
    }

    return 0;
}
EOF

You should now be able to type "make", and the program should build. Typing "make flash" should run OpenOCD, install the program on the board, and start the LED blinking. Remember that our Makefile requires sudo access to actually run openocd. If you don't have sudo access, you can either grant it (by adding your user to the wheel group) or just su to root and run the openocd command by hand.
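
For the by-hand route, the flash target boils down to a single openocd invocation (shown with sudo, exactly as the Makefile from the previous post runs it, using our stm32-openocd.cfg):

$ sudo ~/opt/cross/bin/openocd -f stm32-openocd.cfg -c "init" -c "reset init" \
    -c "flash write_image erase tut.bin 0x08000000" -c "reset run" -c "shutdown"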

Developing STM32 microcontroller code on Linux (Part 6 of 8, building and linking STM32 programs)

The first post of this series gave an overview of the steps needed to build and run code for the STM32. The second post covered how to build a cross-compiler for the STM32. The third post covered how to build a debugger for the STM32. The fourth post covered building and configuring OpenOCD for your development environment. The fifth post covered building the device library, libopencm3. This post will cover linker scripts and command-line options necessary for building and linking programs to run on the STM32.

Once we have all of the previous steps done, we are achingly close to being able to build and run code on our target STM32 processor. However, there is one more set of low-level details that we have to understand before we can get there. Those details revolve around how our C code gets turned into machine code, and how that code is laid out in memory.

As you may know, compiling code to run on a target is roughly a two-step process:
  1. Turn C/C++ code into machine code the target processor understands. The output of this step is a set of object files.
  2. Take the object files and link them together to form a coherent binary. The output of this step is generally an ELF file.
Let's talk about these two steps in more detail.

Compile step

During compilation, the compiler parses the C/C++ code and turns it into an object file. A little more concretely, what we want to have our cross-compiler do is to take our C code, turn it into ARM instructions that can run on the STM32, and then output that into object files.

To do this, we use our cross-compiler. As with any version of gcc, there are many flags that can be passed to our cross-compiler, and they can have many effects on the code that is output. What I'm going to present here is a set of flags that I've found works pretty well. It isn't necessarily optimal in any dimension, but it will at least serve as a starting point for our code. I'll also point out that this is where we start to get into the differences between the various STM32F* processors. For instance, the STM32F4 processor has an FPU, while the older STM32F1 does not. This will affect the flags that we pass to the compiler.

For the STM32F3 processor that I am using, here are the compiler flags: -Wall -Wextra -Wimplicit-function-declaration -Wredundant-decls -Wstrict-prototypes -Wundef -Wshadow -g -fno-common -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -MD
Let's go through each of them. The -W* flags tell the compiler to generate compile-time warnings for several classes of common errors. I find that enabling these warnings and getting rid of them usually makes the code much better. The -g flag tells the compiler to include debugging symbols in the binary; this makes the code easier to debug, at the expense of some code space. The -fno-common flag tells gcc to place uninitialized global variables into the data section of the binary, which improves performance a bit. The -mcpu=cortex-m3 flag tells the compiler that we have a Cortex-M3, and thus to generate code optimized for the Cortex-M3. The -mthumb flag tells gcc to generate ARM thumb code, which is smaller and more compact than full ARM code. The -mfloat-abi=hard flag tells gcc that we want to use a hard float ABI; this doesn't make a huge difference on a processor without an FPU, but is a good habit to get into. Finally, the -MD flag tells gcc to generate dependency files while compiling, which is useful for Makefiles.
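
Strung together, compiling the tut.c file we'll write in the next post looks like this (the Makefile below also adds -DSTM32F3 so that the libopencm3 headers pick the right chip family):

$ arm-none-eabi-gcc -Wall -Wextra -Wimplicit-function-declaration \
    -Wredundant-decls -Wstrict-prototypes -Wundef -Wshadow -g -fno-common \
    -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -MD -DSTM32F3 -c tut.c -o tut.o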

Linking step

Once all of the individual files have been compiled, they are put together into the final binary by the linker. This is more complicated when targeting an embedded platform vs. a regular program. In particular, we have to tell the linker not only which files to link together, but also how to lay the resulting binary out on flash and in memory.

We'll first start by talking about the flags that we need to pass to the linker to make this work. Here is the set of flags we are going to start with: --static -lc -lnosys -T tut.ld -nostartfiles -Wl,--gc-sections -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -lm -Wl,-Map=tut.map
Again, let's go through each of them. The --static flag tells the linker to link a static, not a dynamically linked, binary. This flag probably isn't strictly necessary in this case, but we add it anyway. The -lc flag tells the linker to link this binary against the C library, which is newlib in our case. That gives us access to various convenient functions, such as printf(), scanf(), etc. The -lnosys flag tells the linker to link this binary against the "nosys" library. Several of the convenience functions in the C library require underlying implementations of certain functions to operate, such as _write() for printf(). Since we don't have a POSIX operating system that can provide these for us, the nosys library provides empty stub functions for these. If we want, we can later on define our own versions of these stub functions that will get used instead. The -T tut.ld flag tells the linker to use tut.ld as the linker script; we'll talk more about linker scripts below. The -nostartfiles flag tells the linker not to use standard system startup files. Since we don't have an OS here, we can't rely on the standard OS utilities to start our program up. The -Wl,--gc-sections flag tells the linker to garbage collect unused sections. That is, any sections that are not referenced are removed from the resulting binary, which can shrink the binary. The -mcpu=cortex-m3, -mthumb, and -mfloat-abi=hard flags have the same meaning as for the compile flags. The -lm flag tells the linker to link this binary against the math library. It isn't strictly required for our little programs, but most programs want it sooner or later. Finally, the -Wl,-Map=tut.map tells the linker to generate a map file and stick it into tut.map. The map file is helpful for debugging, but is informational only.
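
Spelled out as a single command, the link step looks like this; it is exactly what the Makefile's tut.elf rule below runs, with the libopencm3 static library we installed in the last post:

$ arm-none-eabi-gcc -o tut.elf tut.o \
    ~/opt/cross/arm-none-eabi/lib/libopencm3_stm32f3.a \
    --static -lc -lnosys -T tut.ld -nostartfiles -Wl,--gc-sections \
    -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -lm -Wl,-Map=tut.map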

Linker script

As mentioned before, the linker script tells the linker how to lay out the resulting binary in memory. This script is highly chip specific; the details have to do with where the processor jumps to on reset and where it expects certain things to be. Note that most chips are configurable (based on some jumper settings), so where they jump to on reset can change. Luckily, most off-the-shelf STM32 designs, including the DISCOVERY boards, are configured to expect the code to start out in flash. Therefore, the linker script tells the linker to lay out the code in flash, but to put the data and bss sections in RAM.

With all that said, libopencm3 actually makes this easy on you. They have default linker scripts for each of the chips that are supported. All you really need to do is to fill in a small linker script with the RAM and FLASH size of your chip, include the default libopencm3 one, and away you go.

So we are going to put all of the above together and write a Makefile and a linker script in the project directory we created in the last tutorial. Neither is necessarily the best example of what to do, but they will get the job done. First the Makefile:

$ cd ~/stm32-project
$ cat <<EOF > Makefile
CC=arm-none-eabi-gcc
LD=\$(CC)
OBJCOPY=arm-none-eabi-objcopy
OPENOCD=~/opt/cross/bin/openocd
CFLAGS=-Wall -Wextra -Wimplicit-function-declaration -Wredundant-decls -Wstrict-prototypes -Wundef -Wshadow -g -fno-common -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -MD -DSTM32F3
LDFLAGS=--static -lc -lnosys -T tut.ld -nostartfiles -Wl,--gc-sections -mcpu=cortex-m3 -mthumb -mfloat-abi=hard -lm -Wl,-Map=tut.map
OBJS=tut.o

all: tut.bin

tut.bin: tut.elf
$( echo -e "\t" )\$(OBJCOPY) -Obinary tut.elf tut.bin

tut.elf: \$(OBJS)
$( echo -e "\t" )\$(CC) -o tut.elf \$(OBJS) ~/opt/cross/arm-none-eabi/lib/libopencm3_stm32f3.a \$(LDFLAGS)

flash: tut.bin
$( echo -e "\t" )sudo \$(OPENOCD) -f stm32-openocd.cfg -c "init" -c "reset init" -c "flash write_image erase tut.bin 0x08000000" -c "reset run" -c "shutdown"

clean:
$( echo -e "\t")rm -f *.elf *.bin *.list *.map *.o *.d *~
EOF

You should notice a few things in the Makefile. First, we use all of the compiler and linker flags that we talked about earlier. Second, our object list ($(OBJS)) is just tut.o, built from the tut.c we'll create in the next post. And third, we have a flash target that will build the project and flash it onto the target processor, using the OpenOCD configuration file that we created a couple of posts ago.

Now the linker script:

$ cat <<EOF > tut.ld
MEMORY
{
    rom (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
    ram (rwx) : ORIGIN = 0x20000000, LENGTH = 40K
}

/* Include the common ld script. */
INCLUDE libopencm3_stm32f3.ld
EOF

You'll notice that there isn't a lot here. We just have to define the RAM location and size, and the ROM (flash) location and size, and the default libopencm3 linker script will take care of the rest.

We now have all of the parts in place. In the next post, we will write, compile, and run a simple program on the board.

Developing STM32 microcontroller code on Linux (Part 5 of 8, building libopencm3)

The first post of this series gave an overview of the steps needed to build and run code for the STM32. The second post covered how to build a cross-compiler for the STM32. The third post covered how to build a debugger for the STM32. The fourth post covered building and configuring OpenOCD for your development environment. This post will cover building the device library, libopencm3.

As mentioned in the introductory post, it makes our life a lot easier to use a device library: a library that abstracts the low-level details of the hardware registers away from us and gives us a nice, consistent API. While ST provides one of these directly, it is not open-source (or, more precisely, its open-source status is murky). Luckily there is libopencm3, an open-source re-implementation that is also, in my opinion, a better library. As usual, I'm going to compile a specific version of libopencm3; newer or older versions may or may not work for you.

As before, we start out by exporting some environment variables:

$ export TOPDIR=~/cross-src
$ export TARGET=arm-none-eabi
$ export PREFIX=~/opt/cross
$ export BUILDPROCS=$( getconf _NPROCESSORS_ONLN )
$ export PATH=$PREFIX/bin:$PATH

The TOPDIR environment variable is the directory in which the sources are stored. The TARGET environment variable is the architecture that we want our compiler to emit code for; for ARM chips without an operating system (like the STM32), we want arm-none-eabi. The PREFIX environment variable is the location we want our cross-compile tools to end up in; feel free to change this to something more suitable. The BUILDPROCS environment variable is the number of processors available; we will use all of them while building to substantially speed up the build process. Finally, we need to add the location of the cross-compile binaries to our PATH so that later build stages can find them.

Now that we have our environment set up, we can get the code. Note that unlike most of the other tools covered in this tutorial, libopencm3 does not do releases. They expect (more specifically, require) that you clone the latest version and use that. That's what we are going to do here. As of this writing, the latest libopencm3 commit hash is a909b5ca9e18f802e3caef19e63d38861662c128. Since the libopencm3 developers don't guarantee API stability, all of the steps below assume the API as of that commit. If you decide to use a newer version of libopencm3, you may have to update the example code I give you to conform to the new API. With that out of the way, let's get it:

$ sudo yum install git
$ cd $TOPDIR
$ git clone https://github.com/libopencm3/libopencm3.git
$ cd libopencm3
$ git checkout -b clalancette-tutorial \
a909b5ca9e18f802e3caef19e63d38861662c128

What we've done here is to clone the repository, then check out a new branch with its head at hash a909b5ca9e18f802e3caef19e63d38861662c128. This ensures that even if the library moves forward in the future, we will always use that commit for the purposes of this tutorial. Next we build the library:

$ unset PREFIX
$ make DETECT_TOOLCHAIN=1
$ make DETECT_TOOLCHAIN=1 install
$ export PREFIX=~/opt/cross

Here we need to unset PREFIX because libopencm3 uses PREFIX for the toolchain name prefix (arm-none-eabi), not the installation path. Once we've done that, we can tell libopencm3 to detect the toolchain and then use it to build the library. Finally, we use the install target to install the headers and the static libraries (.a files) into our toolchain. Assuming this is successful, everything necessary should be in ~/opt/cross/arm-none-eabi/, with the libraries in lib/libopencm3* and the header files in include/libopencm3. Note that there is one .a file per chip family that is supported by libopencm3; we'll return to this when we start building code for our chip.
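
A quick sanity check that the install landed where we expect is to list the libraries and headers:

$ ls ~/opt/cross/arm-none-eabi/lib/libopencm3_*.a
$ ls ~/opt/cross/arm-none-eabi/include/libopencm3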
Here we need to unset PREFIX because libopencm3 uses PREFIX for the toolchain name prefix (arm-none-eabi), not the path prefix. Once we've done that, we can tell libopencm3 to detect the toolchain, and then use it to build libopencm3. Finally we use the install target to install the headers and the static libraries (.a files) to our toolchain. Assuming this is successful, everything necessary should be in ~/opt/cross/arm-none-eabi/, with the libraries in lib/libopencm3* and the header files in include/libopencm3. Note that there is one .a file per chip that is supported by libopencm3; we'll return to this later when we start building code for our chip.