An errbot snap for simplified chatops

I'm a Quality Assurance Engineer. A big part of my job is to find problems, then make sure that they are fixed and automated so they don't regress. If I do my job well, then our process will identify new and potential problems early, without manual intervention from anybody on the team. It's like trying to automate myself, every day, until I'm no longer needed and have to jump to another project.

However, as we work on the project, it's unavoidable that many small manual tasks accumulate in my hands. This happens because I set up the continuous integration infrastructure, so I'm the one who knows it best and has the easiest access; or because I'm the one who requested access to the build farm, so I'm the one with the password; or because I configured the staging environment and I'm the only one who knows the details. This is a great way to achieve job security, but it doesn't lead us to higher quality. It's a job half done, and it's terribly boring to be a bottleneck and a silo of information about testing and the release process. All of these tasks should be shared by the whole team, like all the other tasks in the project.

There are two problems. First, most of these tasks involve delicate credentials that shouldn't be freely shared with everybody. Second, even if the task itself is simple and quick to execute, it's not so simple to document how to set up the environment to execute it, nor how to make sure that the right task is executed at the right moment.

Chatops is how I like to solve all of this. The idea is that every task that requires manual intervention is implemented in a script that can be executed by a bot. This bot joins the communication channel where the entire team is present, and it will execute the tasks and report their results, either in response to external events that happen somewhere in the project infrastructure, or in response to a direct request from a team member in the channel. The credentials are kept safe; they only have to be shared with the bot, and the permissions can be handled with access control lists or membership of the channel. And the operational knowledge is shared with the whole team, because they are all listening in the same channel with the bot. This means that anybody can execute the tasks, and the bot assists them to make it simple.

In snapcraft we started writing our bot not so long ago. It's called snappy-m-o (Microbe Obliterator), and it's written in Python with errbot. We, of course, packaged it as a snap, so we have automated delivery every time we change its source code, and the bot is also auto-updated on the server, so in the chat we are always interacting with the latest and greatest.

Let me show you how we started it, in case you want to get your own. But let's call this one Baymax, and let's make a virtual environment with errbot, to experiment.

drawing of the Baymax bot

$ mkdir -p ~/workspace/baymax
$ cd ~/workspace/baymax
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install errbot
$ errbot --init

The last command will initialize this bot with a super simple plugin, and will configure it to work in text mode. This means that the bot won't be listening on any channel; you can just interact with it through the command line (the ops, without the chat). Let's try it:

$ errbot
>>> !help
All commands
!tryme - Execute to check if Errbot responds to command.
>>> !tryme
It works !
>>> !shutdown --confirm

tryme is the command provided by the example plugin that errbot --init created. Take a look at the plugins/err-example/ directory; errbot is just lovely. To define your own plugin you will just need a class that inherits from errbot.BotPlugin, with the commands as methods decorated with @errbot.botcmd. I won't dig into how to write plugins, because they have amazing documentation about Plugin development. You can also read the plugins we have in our snappy-m-o: one for triggering autopkgtests on GitHub pull requests, and the other for subscribing to the results of the pull requests tests.
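To give a flavour, here's a minimal plugin sketch. The class name, command and reply text are all made up for illustration; only the BotPlugin base class and the @botcmd decorator come from errbot itself, and the try/except stub is there just so the sketch can run even without errbot installed:

```python
# A minimal errbot plugin sketch. The class and command names are
# hypothetical; only BotPlugin and botcmd come from errbot.
try:
    from errbot import BotPlugin, botcmd
except ImportError:
    # Stand-ins so this sketch runs without errbot installed.
    class BotPlugin:
        pass

    def botcmd(func):
        return func


class Baymax(BotPlugin):
    """Example chatops commands for our hypothetical bot."""

    @botcmd
    def hello(self, msg, args):
        # Whatever the method returns is sent back to the channel.
        return "Hello! I am Baymax, your personal chatops companion."
```

Drop a file like this under plugins/ (together with its .plug metadata file, as the errbot docs describe) and !hello becomes available in the chat.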

Let's change the config of Baymax to put it in an IRC chat:

$ pip install irc

And in the config.py file, set the following values:

BACKEND = 'IRC'

BOT_IDENTITY = {
    'nickname' : 'baymax-elopio',  # Nicknames need to be unique, so append your own.
                                   # Remember to replace 'elopio' with your nick everywhere
                                   # from now on.
    'server' : '',
}
CHATROOM_PRESENCE = ('#snappy',)

Run it again with the errbot command, but this time join the #snappy channel in your IRC client, and write !tryme in there. It works ! :)

screenshot of errbot on IRC

So, this is very simple, but let's package it now to start with the good practice of continuous delivery before it gets more complicated. As usual, it just requires a snapcraft.yaml file with all the packaging info and metadata:

name: baymax-elopio
version: '0.1-dev'
summary: A test bot with errbot.
description: Chat ops bot for my team.
grade: stable
confinement: strict

apps:
  baymax:
    command: env LC_ALL=C.UTF-8 errbot -c $SNAP/
    plugs: [home, network, network-bind]

parts:
  errbot:
    plugin: python
    python-packages: [errbot, irc]
  baymax:
    source: .
    plugin: dump
    stage:
      - plugins
    after: [errbot]

And we need to change a few more values in config.py to make sure that the bot is relocatable, that we can run it in the isolated snap environment, and that we can add plugins after it has been installed:

import os

BOT_DATA_DIR = os.environ.get('SNAP_USER_DATA')
BOT_EXTRA_PLUGIN_DIR = os.path.join(os.environ.get('SNAP'), 'plugins')
BOT_LOG_FILE = BOT_DATA_DIR + '/err.log'

One final try, this time from the snap:

$ sudo apt install snapcraft
$ snapcraft
$ sudo snap install baymax*.snap --dangerous
$ baymax-elopio

And go back to IRC to check.

The last thing would be to push the source code we have just written to a GitHub repo and enable continuous delivery. Then go to your server and install the bot with sudo snap install baymax-elopio --edge. Now every time somebody from your team makes a change in the master repo in GitHub, the bot on your server will be automatically updated with those changes within a few hours, without any work on your side.

If you are into chatops, make sure that every time you do a manual task, you also plan for some time to turn that task into a script that can be executed by your bot. And get ready to enjoy tons and tons of free time, or just keep going through those 400 open bugs, whichever you prefer :)

Deploy to all SBCs with Gobot and a single snap package

I love playing with my prototyping boards. Here at Ubuntu we are designing the core operating system to support every single-board computer, and to keep it safe, updated and simple. I've learned a lot about physical computing, but I always have a big problem when my prototype is done and I want to deploy it. I am working with a Raspberry Pi, a DragonBoard, and a BeagleBone. They are all very different, with different architectures, pins, onboard capabilities and peripherals, and they can run different operating systems. When I started learning about this, I had to write three very different programs if I wanted to try my prototype on all my boards.

picture of the three different SBCs

Then I found Gobot, a framework for robotics and IoT that supports my three boards, and many more. With the added benefit that you can write all the software in the lovely and clean Go language. The Ubuntu store supports all their architectures too, and packaging Go projects with snapcraft is super simple. So we can combine all of this to make a single snap package that, with the help of Gobot, will work on every board, and deploy it to all the users of these boards through the snap store.

Let's dig into the code with a very simple example to blink an LED, first for the Raspberry Pi only.

package main

import (
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  adaptor := raspi.NewAdaptor()
  led := gpio.NewLedDriver(adaptor, "7")

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

In there you will see some of the Gobot concepts. There's an adaptor for the board, a driver for the specific device (in this case the LED), and a robot to control everything. In this program, there are only two things specific to the Raspberry Pi: the adaptor and the name of the GPIO pin ("7").

picture of the Raspberry Pi prototype

It works nicely in one of the boards, but let's extend the code a little to support the other two.

package main

import (
  "log"
  "os/exec"
  "strings"
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/beaglebone"
  "gobot.io/x/gobot/platforms/dragonboard"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  out, err := exec.Command("uname", "-r").Output()
  if err != nil {
    log.Fatal(err)
  }
  var adaptor gobot.Adaptor
  var pin string
  kernelRelease := string(out)
  if strings.Contains(kernelRelease, "raspi2") {
    adaptor = raspi.NewAdaptor()
    pin = "7"
  } else if strings.Contains(kernelRelease, "snapdragon") {
    adaptor = dragonboard.NewAdaptor()
    pin = "GPIO_A"
  } else {
    adaptor = beaglebone.NewAdaptor()
    pin = "P8_7"
  }
  digitalWriter, ok := adaptor.(gpio.DigitalWriter)
  if !ok {
    log.Fatal("Invalid adaptor")
  }
  led := gpio.NewLedDriver(digitalWriter, pin)

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

We are basically adding a block to select the right adaptor and pin, depending on which board the code is running on. Now we can compile this program, throw the binary on the board, and give it a try.

picture of the Dragonboard prototype

But we can do better. If we package this in a snap, anybody with one of the boards and an operating system that supports snaps can easily install it. We also open the door to continuous delivery and crowd testing. And as I said before, super simple, just put this in the snapcraft.yaml file:

name: gobot-blink-elopio
version: master
summary: Blink snap for the Raspberry Pi with Gobot
description: |
  This is a simple example to blink an LED in the Raspberry Pi
  using the Gobot framework.

confinement: devmode

apps:
  gobot-blink:
    command: gobot-blink

parts:
  gobot-blink:
    source: .
    plugin: go

To build the snap, here is a cool trick, thanks to the work that kalikiana recently added to snapcraft. I'm writing this code on my development machine, which is amd64. But the Raspberry Pi and BeagleBone are armhf, and the DragonBoard is arm64; so I need to cross-compile the code to get binaries for all the architectures:

snapcraft --target-arch=armhf
snapcraft clean
snapcraft --target-arch=arm64

That will leave two .snap files in my working directory that I can then upload to the store with snapcraft push. Or I can just push the code to GitHub and let the build service take care of building and pushing for me.

Here is the source code for this simple example:

Of course, Gobot supports many more devices that will let you build complex robots. Just take a look at the documentation in the Gobot site, and at the guide about deployable packages with Gobot and snapcraft.

picture of the BeagleBone prototype

If you have one of the boards I'm using here to play, give it a try:

sudo snap install gobot-blink-elopio --edge --devmode
sudo gobot-blink-elopio

Now my experiments will be to try to make the snap more secure, with strict confinement. If you have any questions or want to help, we have a topic in the forum.

User acceptance testing of snaps, with Travis

Travis CI offers a great continuous integration service for the projects hosted on GitHub. With it, you can run tests, deliver artifacts and deploy applications every time you push a commit, on pull requests, after they are merged, or with some other frequency.

Last week Travis CI updated the Ubuntu 14.04 (Trusty) machines that run your tests and deployment steps. This update came with a nice surprise for everybody working to deliver software to Linux users, because it is now possible to install snaps in Travis!

I've been excited all week telling people about all the doors that this opens; but if you have been following my adventures in the Ubuntu world, by now you can probably guess that I'm mostly thinking about all the potential this has for automated testing. For the automation of user acceptance tests.

User acceptance tests are executed from the point of view of the user, with your software presented to them as a black box. The tests can only interact with the software through the entry points you define for your users. If it's a CLI application, then the tests will call commands and subcommands and check the outputs. If it's a website or a desktop application, the tests will click things, enter text and check the changes on the GUI. If it's a service with an HTTP API, the tests will make requests and check the responses. In these tests, the closer you can get to simulating the environment and behaviour of your real users, the better.
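For a CLI application, such a check can be as small as "run the command as a user would, look at the output". Here's a generic helper sketch in Python; the helper name is mine, and it's demonstrated with echo rather than a real application binary:

```python
# Sketch of a tiny CLI acceptance check: run a command exactly as a user
# would, and assert on what they would see on stdout.
import subprocess


def output_contains(command, expected):
    """Run a CLI command and check that its stdout contains expected text."""
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    return expected in result.stdout


# Demonstrated with `echo`; a real suite would call the application's own
# entry point, e.g. ['/snap/bin/ipfs', 'version'].
assert output_contains(["echo", "Hello and Welcome to IPFS!"], "Welcome to IPFS")
```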

Snaps are great for the automation of user acceptance tests because they are immutable and they bundle all their dependencies. With this we can make sure that your snap will work the same on any of the operating systems and architectures that support snaps. The snapd service takes care of hiding the differences and presenting a consistent execution environment for the snap. So, getting a green execution of these tests in the Trusty machine of Travis is a pretty good indication that it will work on all the active releases of Ubuntu, Debian, Fedora and even on a Raspberry Pi.

Let me show you an example of what I'm talking about, obviously using my favourite snap called IPFS. There is more information about IPFS in my previous post.

Check below the packaging metadata for the IPFS snap, a single snapcraft.yaml file:

name: ipfs
version: master
summary: global, versioned, peer-to-peer filesystem
description: |
  IPFS combines good ideas from Git, BitTorrent, Kademlia, SFS, and the Web.
  It is like a single bittorrent swarm, exchanging git objects. IPFS provides
  an interface as simple as the HTTP web, but with permanence built in. You
  can also mount the world at /ipfs.
confinement: strict

apps:
  ipfs:
    command: ipfs
    plugs: [home, network, network-bind]

parts:
  ipfs:
    plugin: nil
    build-packages: [make, wget]
    prepare: |
      mkdir -p ../go/src/
      cp -R . ../go/src/
    build: |
      env GOPATH=$(pwd)/../go make -C ../go/src/ install
    install: |
      mv ../go/bin/ipfs $SNAPCRAFT_PART_INSTALL/bin/
    after: [go]
  go:
    source-tag: go1.7.5

It's not the simplest snap, because they use their own build tool to fetch the Go dependencies and compile; but it's also not too complex. If you are new to snaps and want to understand every detail of this file, or you want to package your own project, the tutorial to create your first snap is a good place to start.

What's important here is that if you run snapcraft using the snapcraft.yaml file above, you will get the IPFS snap. If you install that snap, then you can test it from the point of view of the user. And if the tests work well, you can push it to the edge channel of the Ubuntu store to start the crowdtesting with your community.

We can automate all of this with Travis. The snapcraft.yaml for the project must be already in the GitHub repository, and we will add there a .travis.yml file. They have good docs to prepare your Travis account. First, let's see what's required to build the snap:

sudo: required
services: [docker]

script:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"

For now, we build the snap in a Docker container to keep things simple. We have work in progress to make snapcraft installable as a snap in Trusty, so soon this will be even nicer, running everything directly on the Travis machine.

This previous step will leave the packaged .snap file in the current directory. So we can install it adding a few more steps to the Travis script:


script:
  - docker [...]
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous

And once the snap is installed, we can run it and check that it works as expected. Those checks are our automated user acceptance test. IPFS has a CLI client, so we can just run commands and verify outputs with grep. Or we can get fancier using shunit2 or bats. But the basic idea would be to add to the Travis script something like this:


script:
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"
  - [...]

If one of those checks fail, Travis will mark the execution as failed and stop our release process until we fix them. If instead, all of the checks pass, then this version is good enough to put into the store, where people can take it and run exploratory tests to try to find problems caused by weird scenarios that we missed in the automation. To help with that we have the snapcraft enable-ci travis command, and a tutorial to guide you step by step setting up the continuous delivery from Travis CI.

For the IPFS snap, we had for a long time a manual smoke suite that our amazing community of testers has been executing over and over again, every time we want to publish a new release. I've turned it into a simple bash script that from now on will be executed frequently by Travis, and it will tell us if there's something wrong before anybody gives it a try manually. With this, our community of testers will have more time to run new and interesting scenarios, trying to break the application in clever ways, instead of running the same repetitive steps many times.

Thanks to Travis and snapcraft, we no longer have to worry about a big part of our release process. Continuous integration and delivery can be fully automated, and we will have to take a look only when something breaks.

As for IPFS, it will keep being my guinea pig to guide new features for snapcraft and showcase them when ready. It has many more commands that have to be added to the automated test suite, and it also has a web UI and an HTTP API. Lots of things to play with! If you would like to help, and on the way learn about snaps, automation and the decentralized web, please let me know. You can take a look at my IPFS snap repo for more details about testing snaps in Travis, and other tricks for the build and deployment.

screenshot of the IPFS smoke test running in travis

“Mapillaryando” through the cantinas of San José

-- by Marcia Ugarte and Joaquín Lizano

A Saturday afternoon turned out to be the perfect moment for a group of people to gather in San José and start contributing to free mapping, this time of the cantinas of the capital's downtown. In Costa Rica, a cantina is a popular bar, probably with a good few years behind it, with no snobbish airs, that serves alcohol and food at a good price, or just alcohol.

This self-sacrificing group drew up a plan of visits and some minimal rules for the mapping process: have at most one beer at each place, and where possible eat one of the bocas the corresponding bar is known for. The route started at El Gran Vicio, a lifelong cantina inside the Mercado Central; continued at El Ballestero, the only cantina left on one of the four corners that marked the entrance to the San José of old, said to serve one of the best chifrijos; then we moved on to La Embajada, a gritty bar with a long, long counter, famous for its chorizo gallos; next it was El Faro's turn, a three-storey cantina with a happy hour of beers under a thousand colones and good ribs; after that came La Bohemia, then Wongs, and the last brave souls finished in the early hours at Area City.

Going through the route in detail: the starting point, “El Gran Vicio”, is probably one of the oldest cantinas in San José. Located in the city's Mercado Central, it opened its doors in 1880. You could say it is so old that it seems to be for men only. The urinal sits in a corner of the bar, and its door neither opens nor closes properly; it just hangs there, half in place, offering none of the privacy you would expect from a restroom. A women's restroom? There isn't one. The wall opposite the counter of this space, which works as a corridor because it is so narrow, is covered with signatures, messages and graphic memories.

  • a clientele who all know each other, some in their work uniforms from the market
  • the bartender was not very friendly with strangers (us…)
  • it's a bar for quick stops (pop in for a beer and/or a shot and leave)
  • an interesting experience

From there we set off for “El Ballestero”. Few cantinas have natural plants at the entrance, a sign that we were in for a different experience. It sits on one of the corners at the end of the wide street that leads into the capital from the north. From the corner table we happily ate the bocas (I can attest that the patacones with beans are among the best in the city) while admiring the mirrored disco ball in the middle of the ceiling (no lights aimed at it, no mechanism to make it spin), a whim of the owners to lend the place a festive air. Maybe the ball doesn't go with the glass collection and the family photos on the walls; maybe that's exactly the style they were after.

  • Tex-Mex music
  • house-made chilera
  • credit cards not welcome (but accepted if you insist a lot)
  • better bring cash

The walks between bars deserve a mention too: a group of young people (and some not so young) strolling with their phones held horizontally above their heads, recording the way, following their “leader”, who walks with a technological staff with an eye at the top. Let's just say we did not go unnoticed by the josefino public. If someone saw us pass by out of nowhere, they probably wouldn't know what to think either… tourists, aliens, geeks making a film or documentary about Chepe, Pokémon hunters…?

The next bar was “La Embajada”, which ended up being christened the new Mapillary embassy in Costa Rica. Its main feature is the enormous counter that takes up a large part of the space and gives the impression that if we dared to reach its end the darkness would swallow us, but no. The back is full of tables, with enough room for the whole group and a mariachi band that sets up at the end of the counter. It is genuinely surprising that such a large space exists and that you could easily walk past it without realizing what's inside.

  • the beer at 900 seems to have been an old promotion they never took down. It cost 1000
  • a very long counter
  • the chorizo or salchichón gallos come without a tortilla
  • the mariachis compete with the piped-in music
  • no separate tabs
  • very crowded

We continued the route at sunset, and the fading light made it harder and harder to map while walking; but, “self-sacrificing” as we are, we did everything in our power to keep the mapping going. We walked through the very centre of Chepe and arrived at El Faro. An old three-storey building with a view over the south of San José. How far you can see from this lighthouse will depend, among other things, on how much you let the party take over. Each floor has its own atmosphere; in fact, the third floor is for parties and isn't normally open. The ground floor was full, so we went to the second floor, which also had eighties rock/pop that most of the group appreciated. The open windows let in such a gale that, if you wanted to drift off, you could think of the lighthouse and imagine yourself who knows where.

  • good service
  • separate tabs
  • cards accepted
  • different atmospheres
  • good music

By now well on our way (more rumba than route), we went down one block and arrived at La Bohemia. A cantina of long tradition, even for some of the members of the mapping group.

  • easy to feel welcome in the place
  • a strong bond among the regulars
  • they even gave us cake from a birthday they were celebrating
  • few bocas
  • separate tabs
  • cards accepted

By that hour of the night the route had become unmappable, but the self-sacrificing spirit remained. We went to check out another couple of places we could include in future missions. Wongs is more a restaurant than a cantina, and instead of bocas there were dumplings, a moment of exception along the route. We finished in the early hours of the next day at Area City, celebrating the birthday of someone in our mapping group. A great outing which, if this festive spirit holds, may produce many self-sacrificing future volunteers to help us get to know more of the traditional cantinas still left in San José.

The Mapillary embassy in Costa Rica

Mining Ethereum


  • A mining machine
  • Ubuntu Server 16.04
  • Optional: a video card (GPU)
  • Python and python-twisted
  • Ethereum
  • cpp-ethereum

NOTE: This assumes Ubuntu Server; on Ubuntu Desktop some of these requirements come preinstalled with the system.


  1. Install Ubuntu 16.04.

  2. Install python and python-twisted:

sudo apt-get install python
sudo apt-get install python-twisted

  3. Once the operating system is installed, enable the Ethereum PPAs:

sudo add-apt-repository ppa:ethereum/ethereum
sudo add-apt-repository ppa:ethereum/ethereum-qt
sudo add-apt-repository ppa:ethereum/ethereum-dev
sudo apt-get update

  4. Install ethereum:

sudo apt-get install ethereum

  5. Install cpp-ethereum:

sudo apt-get install cpp-ethereum

  6. Clone the eth-proxy repository.

  7. Create a wallet with geth or parity.

  8. Install the video drivers, if you are using a video card.

  9. Modify the eth-proxy configuration file to use the wallet.

  10. In the eth-proxy directory, run:

sudo python eth-proxy/

  11. Run ethminer pointing at localhost:

ethminer -F -G

NOTE: The -G option tells ethminer to use the GPU to mine; if you don't have a GPU, use --allow-opencl-cpu.


Crowdtesting with the Ubuntu community: the case of IPFS

Here at Ubuntu we are working hard on the future of free software distribution. We want developers to release their software to any Linux distro in a way that's safe, simple and flexible. You can read more about this at

This work is extremely fun because we have to work constantly with a wild variety of free software projects to make sure that the tools we write are usable and that the workflow we are proposing makes sense to developers and gives them a lot of value in return. Today I want to talk about one of those projects: IPFS.

IPFS is the permanent and decentralized web. How cool is that? You get a peer-to-peer distributed file system where you store and retrieve files. They have a nice demo on their website, and you can give it a try on Ubuntu Trusty, Xenial or later by running:

$ sudo snap install ipfs

screenshot of the IPFS peers

So, here's one of the problems we are trying to solve. We have millions of users on the Trusty version of Ubuntu, released during 2014. We also have millions of users on the Xenial version, released during 2016. Those two versions are stable now and, following the Ubuntu policies, they will get only security updates for 5 years. That means it's very hard, almost impossible, for a young project like IPFS to get into the Ubuntu archives for those releases. There would be no simple way for all those users to enjoy IPFS; they would have to use a Personal Package Archive or install the software from a tarball. Both methods are complex, with high security risks, and both require the users to put a lot of trust in the developers, more than they should ever put in anybody.

We are closing the Zesty release cycle, which will go out in April, so it's too late there too. IPFS could make a deb, put it into Debian, wait for it to sync to Ubuntu, and then it would likely be ready for the October release. Aside from the fact that we would have to wait until October, there are a few other problems. First, making a deb is not simple. It's not too hard either, but it requires quite some time to learn to do it right. Second, I mentioned that IPFS is young; they are on version 0.4.6. So it's very unlikely that they will want to support this early version for as long as Debian and Ubuntu require. And they are not only young, they are also fast. They add new features and bug fixes every day and make new releases almost every week, so they need a feedback loop that's just as fast. A 6-month release cycle is way too slow. That works nicely for some kinds of free software projects, but not for one like IPFS.

They have been kind enough to let me play with their project and use it as a test subject to verify our end-to-end workflow. My passion is testing, so I have been focusing on continuous delivery to get happy early adopters and constant feedback about the most recent changes in the project.

I started by making a snapcraft.yaml file that contains all the metadata required for the snap package. The file is pretty simple and to make the first version it took me just a couple of minutes, true story. Since then I've been slowly improving and updating it with small changes. If you are interested in doing the same for your project, you can read the tutorial to create a snap.

I built and tested this snap locally on my machines. It worked nicely, so I pushed it to the edge channel of the Ubuntu Store. Here, the snap is not visible on user searches, only the people who know about the snap will be able to install it. I told a couple of my friends to give it a try, and they came back telling me how cool IPFS was. Great choice for my first test subject, no doubt.

At this point, following the pace of the project by manually building and pushing new versions to the store was too demanding; they go too fast. So, I started working on continuous delivery by translating everything I did manually into scripts and hooking them to travis-ci. After a few days it got pretty fancy; take a look at the GitHub repo of the IPFS snap if you are curious. Every day, a new version is packaged from the latest state of the master branch of IPFS and it is pushed to the edge channel, so we have a constant flow of new releases for hardcore early adopters. After they install IPFS from the edge channel once, the package will be automatically updated on their machines every day, so they don't have to do anything else, just use IPFS as they normally would.

Now, with this constant stream of updates, my two friends and I were not enough to validate all the new features. We could never be sure if the project was stable enough to be pushed to the stable channel and made available to the millions and millions of Ubuntu users out there.

Luckily, the Ubuntu community is huge, and they are very nice people. It was time to use the wisdom of the crowd. I invited the bravest of them to keep the snap installed from edge, and I defined a simple pipeline that leads to the stable release using the four available channels in the Ubuntu store:

  • When a revision is tagged in the IPFS master repo, it is automatically pushed to edge channel from travis, just as with any other revision.
  • Travis notifies me about this revision.
  • I install this tagged revision from edge, and run a super quick test to make sure that the IPFS server starts.
  • If it starts, I push the snap to the beta channel.
  • With a couple of my friends, we run a suite of smoke tests.
  • If everything goes well, I push the snap to the candidate channel.
  • I notify the community of Ubuntu testers about a new version in the candidate channel. This is where the magic of crowd testing happens.
  • The Ubuntu testers run the smoke tests in all their machines, which gives us the confidence we need because we are confirming that the new version works on different platforms, distros, distro releases, countries, network topologies, you name it.
  • This candidate release is left for some time in this channel, to let the community run thorough exploratory tests, trying to find weird usage combinations that could break the software.
  • If the tag was for a final upstream release, the community also runs update tests to make sure that the users with the stable snap installed will get this new version without issues.
  • After all the problems found by the community have been resolved or at least acknowledged and triaged as not blockers, I move the snap from candidate to the stable channel.
  • All the users following the stable channel will automatically get a very well tested version, thanks to the community who contributed with the testing and accepted a higher level of risk.
  • And we start again, the never-ending cycle of making free software :)
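The channel progression above can be sketched as a tiny helper script. This is illustrative only; the `next_channel` function is hypothetical, and the actual promotions are done against the store with commands like `snapcraft release ipfs <revision> <channel>` after each validation step passes:

```shell
#!/bin/sh
# Illustrative only: encodes the edge -> beta -> candidate -> stable
# promotion order described above. Each hop happens only after the
# corresponding round of testing succeeds.
next_channel() {
  case "$1" in
    edge)      echo beta ;;
    beta)      echo candidate ;;
    candidate) echo stable ;;
    *)         echo "unknown channel: $1" >&2; return 1 ;;
  esac
}

next_channel edge    # prints "beta"
```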

Now, let's go back to the discussion about trust. Debian and Ubuntu, like most other distros, rely on maintainers and distro developers to package and review every change to the software that goes into their archives. That is a lot of work, and as we have seen, it slows down the feedback loop a lot. Here we have automated most of the tasks of a distro maintainer, and new revisions can be delivered directly to the users without any reviews. So users trust their upstream developers directly, without intermediaries, but this is very different from the previously existing and unsafe methods. Snap code is installed read-only and tightly confined, with access only to its own safe space. Any other access needs to be declared by the snap, and the user is always in control of which access is granted to the application.

This way, upstream developers can go faster without exposing their users to unnecessary risks. All they need is a simple snapcraft.yaml file and their own continuous delivery pipeline, defined on their own timeline.
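To give an idea of how simple that recipe can be, here is a minimal snapcraft.yaml sketch. This is not the real IPFS snap's recipe (which declares more parts and interfaces); it is just an assumed, stripped-down example of the format:

```yaml
# Illustrative sketch of a snapcraft.yaml for a Go project like IPFS.
name: ipfs
version: master
summary: global, versioned, peer-to-peer filesystem
description: |
  IPFS combines good ideas from Git, BitTorrent, Kademlia, SFS, and the Web.
confinement: strict
grade: devel

apps:
  ipfs:
    command: ipfs
    # Declared interfaces: everything else stays confined.
    plugs: [home, network, network-bind]

parts:
  ipfs:
    plugin: go
    go-importpath: github.com/ipfs/go-ipfs
    source: https://github.com/ipfs/go-ipfs.git
```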

By removing the distro as the intermediary between developers and their users, we are also opening up a new world of possibilities for the Ubuntu community. Now they can collaborate constantly and directly with upstream developers, closing this quick feedback loop. In the future we will tell our children about the good old days, when we had to report a bug in Ubuntu, which would be copied to Debian, then sent upstream to the developers, and after 6 months the fix would arrive. It was fun, and it led us to where we are today, but I will not miss it at all.

Finally, what's next for IPFS? After this experiment we got more than 200 unique testers and almost 300 test installs. I now have great confidence in this workflow: new revisions were delivered on time, existing Ubuntu testers became new IPFS contributors, and I can now safely recommend that IPFS users install the stable snap. But there's still plenty of work ahead. There are still manual steps in the pipeline that can be scripted, the smoke tests can be automated to leave more free time for exploratory testing, we can also release to the armhf and arm64 architectures to get IPFS into the IoT world, and of course the developers are not stopping; they keep releasing interesting new features. As I said, plenty of opportunities for us as distro contributors.

screenshot of the IPFS snap stats

I'd like to thank everybody who tested the IPFS snap, especially the following people for their help and feedback:

  • freekvh
  • urcminister
  • Carla Sella
  • casept
  • Colin Law
  • ventrical
  • cariboo
  • howefield


If you want to release your project to the Ubuntu store, take a look at the snapcraft docs, the Ubuntu tutorials, and come talk to us in Rocket Chat.

Maperespeis #2: Volcán Poás

Last Sunday we went to make free maps at the Volcán Poás.

This was the second geek excursion of the JaquerEspéis. From the first one we learned that we had to wait for the summer, because you can't map in a storm. And the day was perfect: not only was it sunny, but the crater was completely clear, so we were able to add a new place to the virtual tour of Costa Rica.

Also, this time we arrived much better prepared, with several phones running Mapillary, OsmAnd and OSMTracker, a 360° camera, a Garmin GPS, a drone, and even a notebook and two biologists.

The MaperEspeis procession

This is how it works. With the GPS on our phones enabled, we all wait for the phones to find their location. Then each person uses their preferred application to collect data: photos, audio, video, text notes, traces, annotations in the notebook...

Later, from our respective homes, we upload, publish and share all the collected data. We use it to improve the free maps of OpenStreetMap. We add everything from things as simple as the location of a trash can to things as important as how accessible the place is for a person in a wheelchair, along with the location of all the accessible entrances and the parts where they are missing. Each person improves the map a little, in the area they know or passed through. With more than 3 million users, OpenStreetMap is the best map of the world in existence; and it is particularly important in regions like ours, which hold little economic potential for the megacorporations that make and sell closed maps while stealing private data from their users.

Because the maps we make are free, what comes next has no limits. There are groups working on reconstructing three-dimensional models from the photos, on identifying and interpreting signs and billboards, on applications that calculate the optimal route to any place using any combination of means of transport, on applications to assist decision-making when designing the future of a city, and on many other things. All based on shared knowledge and community.

The image above is the virtual tour in Mapillary. Since we recorded it with the 360° camera, you can click and drag with the mouse to see every angle. You can also click the play button at the top to follow the path we took, or click on any green dot on the map to follow your own path.

Many thanks to everybody who signed up to map, especially to Denisse and Charles for being our guides and filling the trip with interesting facts about the flora, fauna, geology and historical importance of Poás.

Members of the MaperEspeis (more photos and videos here)

The next maperespeis will be on March 12th.