How to install Firma Digital de Costa Rica on GNU/Linux Fedora 26

This guide documents how to install the driver for the Costa Rican Firma Digital card and the certificate hierarchy of the Central Bank (SINPE) and of MICITT on the Fedora operating system for 64-bit Intel architecture (x86_64).

This new installation guide has the following goals:

  • Configure the system in the simplest and most suitable way so that it works with as many programs as possible.
  • Make it work for every user of the system, including new users created after the installation.
  • Work with obsolete services such as the CCSS one (with a Java applet) in the Icecat browser.

Installing the dependencies

  • Install the PC/SC CCID support so that the card reader is recognized, together with the IcedTea-Web NPAPI plugin and the IceCat browser, needed to load the Java applet that allows signing from the browser (an optional reader check follows below):
sudo dnf -y install pcsc-lite-ccid icedtea-web icecat

sudo systemctl enable pcscd.service

sudo systemctl start pcscd.service
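Optionally, to confirm that the reader is detected before continuing, the pcsc-tools package (not included in the command above) provides pcsc_scan; this is just a quick sanity check:

sudo dnf -y install pcsc-tools
pcsc_scan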

Downloading the “installer”

  • Download the “installer” from the drop-down named “Usuarios Linux” on the installer download page of the Soporte Firma Digital de Costa Rica website, entering the card's serial number and the captcha.

Unpacking the “installer”

  • Decompress the downloaded zip file with unzip; at the time of writing this documentation it is called sfd_ClientesLinux_Rev08.zip. A folder named Firma Digital will be created. It is assumed that the zip file was downloaded to the Descargas folder:
cd ~/Descargas
unzip sfd_ClientesLinux_Rev08.zip

Installing the certificates

It is necessary to add the SINPE and MICITT certificate hierarchy to the trust list. This takes a couple of commands:

  • Copy the certificates:
sudo cp ~/Descargas/Firma\ Digital/Certificados/* /usr/share/pki/ca-trust-source/anchors/
  • Regenerate the certificate bundles for all applications (a quick verification follows below):
sudo update-ca-trust
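To confirm that the anchors were added to the system trust store, p11-kit's trust tool can be queried; the grep pattern here is only an example, adjust it to the names of the certificates you copied:

trust list | grep -i sinpe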

Installing the PKCS#11 module

Although there is a module in the Librerías directory, it is not the most recent version and has several linking defects. The version distributed in the PinTool package is newer and works correctly in all the programs tested. The following procedure extracts and installs it, preserving the library's original date and setting the correct user and SELinux permissions.

  • Install the proprietary PKCS#11 module in /usr/lib64/pkcs11:
cd ~/Descargas/Firma\ Digital/PinTool/IDProtect\ PINTool\ 6.41.01/RPM

rpm2cpio idprotectclient-641.01-0.x86_64.rpm | cpio -dim ./usr/lib/x64-athena/libASEP11.so

sudo mv usr/lib/x64-athena/libASEP11.so /usr/lib64/pkcs11/

sudo chown root:root /usr/lib64/pkcs11/libASEP11.so

sudo chmod 755 /usr/lib64/pkcs11/libASEP11.so

sudo chcon system_u:object_r:lib_t:s0 /usr/lib64/pkcs11/libASEP11.so
  • Create the following symbolic links (needed for some programs and applets to work):
sudo ln -s /usr/lib64/pkcs11/libASEP11.so /usr/lib64/

sudo ln -s /usr/lib64/pkcs11/libASEP11.so /usr/lib/

sudo mkdir -p /usr/lib/x64-athena/

sudo ln -s /usr/lib64/pkcs11/libASEP11.so /usr/lib/x64-athena/
  • If you are going to work with the CCSS applet, you can optionally perform the following step:
sudo mkdir -p /Firma_Digital/LIBRERIAS/

sudo ln -s /usr/lib/libASEP11.so /Firma_Digital/LIBRERIAS/

sudo ln -s /usr/share/pki/ca-trust-source/anchors/ /Firma_Digital/CERTIFICADOS
  • Create the file /etc/Athena/IDPClientDB.xml and open it for editing:
sudo mkdir /etc/Athena/

su -c "gedit /etc/Athena/IDPClientDB.xml"
  • In the gedit text editor window, paste the following text, save, and close the editor:
<?xml version="1.0" encoding="utf-8" ?>
<IDProtect>
 <TokenLibs>
  <IDProtect>
   <Cards>
    <IDProtectXF>
     <ATR type='hexBinary'>3BDC00FF8091FE1FC38073C821106600000000000000</ATR>
     <ATRMask type='hexBinary'>FFFF00FFF0FFFFFFFFFFFFFFFFF0FF00000000000000</ATRMask>
    </IDProtectXF>
   </Cards>
  </IDProtect>
 </TokenLibs>
</IDProtect>
  • Create a file called /etc/pkcs11/modules/firmadigital.module and open it for editing:
su -c "gedit /etc/pkcs11/modules/firmadigital.module"
  • In the gedit text editor window, paste the following text, save, and close the editor:
module: libASEP11.so
  • Run the following command to replace the libnssckbi symbolic link so that p11-kit-proxy is used preferentially:
sudo alternatives --install /usr/lib64/libnssckbi.so libnssckbi.so.x86_64 /usr/lib64/p11-kit-proxy.so 50

That's all. Firefox, Evolution, and any other application that uses certificates must be restarted for the changes to take effect. If the reader is plugged in and the card is inserted in the reader, these applications will ask for the PIN, which indicates that the installation was successful.
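Before restarting the applications, it can also be useful to confirm that p11-kit is exposing the new module; a quick check (the exact module name in the listing may vary):

p11-kit list-modules | grep -A 3 firmadigital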

If the Central Bank's signing component is installed, it should now work for running a test signature.

If you optionally want to try the obsolete Java applet on the CCSS page, note that it does not work with Firefox 54 and later; you have to use the Icecat browser and disable several bundled extensions that block JavaScript. The browser will report that “IcedTea-Web” wants to run; allow it. If the browser asks questions about the applet, answer affirmatively, accept every message box that appears, and enter the PIN when prompted.

Hello World with Ethereum

Tool list

  • testRPC
  • nvm
  • web3
  • geth
  • solc

Installation

Linux

The following steps show how to install the required tools on Ubuntu 17.04:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash
nvm ls-remote
nvm install <latest LTS>
npm install -g ethereumjs-testrpc
npm install solc
npm install web3

Mac

To install nvm, brew is required:

brew install nvm
nvm ls-remote
nvm install <latest LTS>
npm install -g ethereumjs-testrpc
npm install solc
npm install web3

Development

Write the contract with your preferred editor. We will use the Solidity language, although other options such as Serpent exist.

To compile the contract we will use the command solc --bin --optimize <file.sol>.

Write the following contract in a file called Voting.sol:

pragma solidity ^0.4.11;
// We have to specify what version of compiler this code will compile with

contract Voting {
  /* mapping field below is equivalent to an associative array or hash.
  The key of the mapping is candidate name stored as type bytes32 and value is
  an unsigned integer to store the vote count
  */

  mapping (bytes32 => uint8) public votesReceived;

  /* Solidity doesn't let you pass in an array of strings in the constructor (yet).
  We will use an array of bytes32 instead to store the list of candidates
  */

  bytes32[] public candidateList;

  /* This is the constructor which will be called once when you
  deploy the contract to the blockchain. When we deploy the contract,
  we will pass an array of candidates who will be contesting in the election
  */
  function Voting(bytes32[] candidateNames) {
    candidateList = candidateNames;
  }

  // This function returns the total votes a candidate has received so far
  function totalVotesFor(bytes32 candidate) returns (uint8) {
    if (validCandidate(candidate) == false) throw;
    return votesReceived[candidate];
  }

  // This function increments the vote count for the specified candidate. This
  // is equivalent to casting a vote
  function voteForCandidate(bytes32 candidate) {
    if (validCandidate(candidate) == false) throw;
    votesReceived[candidate] += 1;
  }

  function validCandidate(bytes32 candidate) returns (bool) {
    for(uint i = 0; i < candidateList.length; i++) {
      if (candidateList[i] == candidate) {
        return true;
      }
    }
    return false;
  }
}

Steps to deploy the contract
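These steps assume a local test network is listening on http://localhost:8545 (the provider URL used below). A minimal way to get one is to start the testrpc installed earlier in a separate terminal and leave it running:

testrpc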

Run node

As the commands are executed, you can look at their output and analyze it.

Web3 = require('web3')
web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"));

List the accounts that exist on the network

web3.eth.accounts

Compile the code

fs = require('fs')
code = fs.readFileSync('Voting.sol').toString()
solc = require('solc')
compiledCode = solc.compile(code)
abiDefinition = JSON.parse(compiledCode.contracts[':Voting'].interface)
VotingContract = web3.eth.contract(abiDefinition)
byteCode = compiledCode.contracts[':Voting'].bytecode
deployedContract = VotingContract.new(['Rama','Nick','Jose'],{data: byteCode, from: web3.eth.accounts[0], gas: 4700000})
deployedContract.address
contractInstance = VotingContract.at(deployedContract.address)
> contractInstance.totalVotesFor.call('Rama')

{ [String: '0'] s: 1, e: 0, c: [ 0 ] }

> contractInstance.voteForCandidate('Rama', {from: web3.eth.accounts[0]})

'0xdedc7ae544c3dde74ab5a0b07422c5a51b5240603d31074f5b75c0ebc786bf53'

> contractInstance.voteForCandidate('Rama', {from: web3.eth.accounts[0]})

'0x02c054d238038d68b65d55770fabfca592a5cf6590229ab91bbe7cd72da46de9'

> contractInstance.voteForCandidate('Rama', {from: web3.eth.accounts[0]})

'0x3da069a09577514f2baaa11bc3015a16edf26aad28dffbcd126bde2e71f2b76f'

> contractInstance.totalVotesFor.call('Rama').toLocaleString()

'3'

EVM opcodes

0s: Stop and Arithmetic Operations

0x00    STOP        Halts execution
0x01    ADD         Addition operation
0x02    MUL         Multiplication operation
0x03    SUB         Subtraction operation
0x04    DIV         Integer division operation
0x05    SDIV        Signed integer division operation
0x06    MOD         Modulo remainder operation
0x07    SMOD        Signed modulo remainder operation
0x08    ADDMOD      Modulo addition operation
0x09    MULMOD      Modulo multiplication operation
0x0a    EXP         Exponential operation
0x0b    SIGNEXTEND  Extend length of two's complement signed integer

10s: Comparison & Bitwise Logic Operations

0x10    LT      Less-than comparison
0x11    GT      Greater-than comparison
0x12    SLT     Signed less-than comparison
0x13    SGT     Signed greater-than comparison
0x14    EQ      Equality  comparison
0x15    ISZERO  Simple not operator
0x16    AND     Bitwise AND operation
0x17    OR      Bitwise OR operation
0x18    XOR     Bitwise XOR operation
0x19    NOT     Bitwise NOT operation
0x1a    BYTE    Retrieve single byte from word

20s: SHA3

0x20    SHA3    Compute Keccak-256 hash

30s: Environmental Information

0x30    ADDRESS         Get address of currently executing account
0x31    BALANCE         Get balance of the given account
0x32    ORIGIN          Get execution origination address
0x33    CALLER          Get caller address. This is the address of the account that is directly responsible for this execution
0x34    CALLVALUE       Get deposited value by the instruction/transaction responsible for this execution
0x35    CALLDATALOAD    Get input data of current environment
0x36    CALLDATASIZE    Get size of input data in current environment
0x37    CALLDATACOPY    Copy input data in current environment to memory. This pertains to the input data passed with the message call instruction or transaction
0x38    CODESIZE        Get size of code running in current environment
0x39    CODECOPY        Copy code running in current environment to memory
0x3a    GASPRICE        Get price of gas in current environment
0x3b    EXTCODESIZE     Get size of an account's code
0x3c    EXTCODECOPY     Copy an account's code to memory

40s: Block Information

0x40    BLOCKHASH   Get the hash of one of the 256 most recent complete blocks
0x41    COINBASE    Get the block's beneficiary address
0x42    TIMESTAMP   Get the block's timestamp
0x43    NUMBER      Get the block's number
0x44    DIFFICULTY  Get the block's difficulty
0x45    GASLIMIT    Get the block's gas limit

50s Stack, Memory, Storage and Flow Operations

0x50    POP         Remove item from stack
0x51    MLOAD       Load word from memory
0x52    MSTORE      Save word to memory
0x53    MSTORE8     Save byte to memory
0x54    SLOAD       Load word from storage
0x55    SSTORE      Save word to storage
0x56    JUMP        Alter the program counter
0x57    JUMPI       Conditionally alter the program counter
0x58    PC          Get the value of the program counter prior to the increment
0x59    MSIZE       Get the size of active memory in bytes
0x5a    GAS         Get the amount of available gas, including the corresponding reduction
0x5b    JUMPDEST    Mark a valid destination for jumps

60s & 70s: Push Operations

0x60    PUSH1   Place 1 byte item on stack
0x61    PUSH2   Place 2-byte item on stack
…
0x7f    PUSH32  Place 32-byte (full word) item on stack

80s: Duplication Operations

0x80    DUP1    Duplicate 1st stack item
0x81    DUP2    Duplicate 2nd stack item
…
0x8f    DUP16   Duplicate 16th stack item

90s: Exchange Operations

0x90    SWAP1   Exchange 1st and 2nd stack items
0x91    SWAP2   Exchange 1st and 3rd stack items
…   …
0x9f    SWAP16  Exchange 1st and 17th stack items

a0s: Logging Operations

0xa0    LOG0    Append log record with no topics
0xa1    LOG1    Append log record with one topic
…   …
0xa4    LOG4    Append log record with four topics

f0s: System operations

0xf0    CREATE          Create a new account with associated code
0xf1    CALL            Message-call into an account
0xf2    CALLCODE        Message-call into this account with alternative account's code
0xf3    RETURN          Halt execution returning output data
0xf4    DELEGATECALL    Message-call into this account with an alternative account's code, but persisting the current values for `sender` and `value`

Halt Execution, Mark for deletion

0xff    SELFDESTRUCT    Halt execution and register account for later deletion

References

  • https://github.com/creationix/nvm
  • https://github.com/ethereumjs/testrpc
  • https://en.wikipedia.org/wiki/Remote_procedure_call
  • http://ethdocs.org/en/latest/network/test-networks.html
  • https://medium.com/@doart3/ethereum-dapps-without-truffle-compile-deploy-use-it-e6daeefcf919
  • https://medium.com/@mvmurthy/full-stack-hello-world-voting-ethereum-dapp-tutorial-part-1-40d2d0d807c2
  • https://github.com/ethereum/yellowpaper

Notes

testrpc -n5

An errbot snap for simplified chatops

I'm a Quality Assurance Engineer. A big part of my job is to find problems, then make sure that they are fixed and automated so they don't regress. If I do my job well, then our process will identify new and potential problems early without manual intervention from anybody in the team. It's like trying to automate myself, every day, until I'm no longer needed and have to jump to another project.

However, as we work on the project, it's unavoidable that many small manual tasks accumulate on my hands. This happens because I set up the continuous integration infrastructure, so I'm the one who knows the most about it and has the easiest access; or because I'm the one who requested access to the build farm, so I'm the one with the password; or because I configured the staging environment and I'm the only one who knows the details. This is a great way to achieve job security, but it doesn't lead us to higher quality. It's a job half done, and it's terribly boring to be a bottleneck and a silo of information about testing and the release process. All of these tasks should be shared by the whole team, as with all the other tasks in the project.

There are two problems. First, most of these tasks involve delicate credentials that shouldn't be freely shared with everybody. Second, even if the task itself is simple and quick to execute, it's not very simple to document how to set up the environment to be able to execute them, nor how to make sure that the right task is executed in the right moment.

Chatops is how I like to solve all of this. The idea is that every task that requires manual intervention is implemented in a script that can be executed by a bot. This bot joins the communication channel where the entire team is present, and it will execute the tasks and report about their results as a response to external events that happen somewhere in the project infrastructure, or as a response to the direct request of a team member in the channel. The credentials are kept safe, they only have to be shared with the bot and the permissions can be handled with access control lists or membership to the channel. And the operative knowledge is shared with all the team, because they are all listening in the same channel with the bot. This means that anybody can execute the tasks, and the bot assists them to make it simple.

In snapcraft we started writing our bot not so long ago. It's called snappy-m-o (Microbe Obliterator), and it's written in Python with errbot. We, of course, packaged it as a snap so we have automated delivery every time we change its source code, and the bot is also auto-updated on the server, so in the chat we are always interacting with the latest and greatest.

Let me show you how we started it, in case you want to get your own. But let's call this one Baymax, and let's make a virtual environment with errbot, to experiment.

drawing of the Baymax bot

$ mkdir -p ~/workspace/baymax
$ cd ~/workspace/baymax
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install errbot
$ errbot --init

The last command will initialize this bot with a super simple plugin, and will configure it to work in text mode. This means that the bot won't be listening on any channel, you can just interact with it through the command line (the ops, without the chat). Let's try it:

$ errbot
[...]
>>> !help
All commands
[...]
!tryme - Execute to check if Errbot responds to command.
[...]
>>> !tryme
It works !
>>> !shutdown --confirm

tryme is the command provided by the example plugin that errbot --init created. Take a look at the file plugins/err-example/example.py, errbot is just lovely. In order to define your own plugin you will just need a class that inherits from errbot.BotPlugin, and the commands are methods decorated with @errbot.botcmd. I won't dig into how to write plugins, because they have an amazing documentation about Plugin development. You can also read the plugins we have in our snappy-m-o, one for triggering autopkgtests on GitHub pull requests, and the other for subscribing to the results of the pull requests tests.

Let's change the config of Baymax to put it in an IRC chat:

$ pip install irc

And in the config.py file, set the following values:

BACKEND = 'IRC'
BOT_IDENTITY = {
    'nickname' : 'baymax-elopio',  # Nicknames need to be unique, so append your own.
                                   # Remember to replace 'elopio' with your nick everywhere
                                   # from now on.
    'server' : 'irc.freenode.net',
}
CHATROOM_PRESENCE = ('#snappy',)

Run it again with the errbot command, but this time join the #snappy channel in irc.freenode.net, and write in there !tryme. It works ! :)

screenshot of errbot on IRC

So, this is very simple, but let's package it now to start with the good practice of continuous delivery before it gets more complicated. As usual, it just requires a snapcraft.yaml file with all the packaging info and metadata:

name: baymax-elopio
version: '0.1-dev'
summary: A test bot with errbot.
description: Chat ops bot for my team.
grade: stable
confinement: strict

apps:
  baymax-elopio:
    command: env LC_ALL=C.UTF-8 errbot -c $SNAP/config.py
    plugs: [home, network, network-bind]

parts:
  errbot:
    plugin: python
    python-packages: [errbot, irc]
  baymax:
    source: .
    plugin: dump
    stage:
      - config.py
      - plugins
    after: [errbot]

And we need to change a few more values in config.py to make sure that the bot is relocatable, that we can run it in the isolated snap environment, and that we can add plugins after it has been installed:

import os

BOT_DATA_DIR = os.environ.get('SNAP_USER_DATA')
BOT_EXTRA_PLUGIN_DIR = os.path.join(os.environ.get('SNAP'), 'plugins')
BOT_LOG_FILE = BOT_DATA_DIR + '/err.log'

One final try, this time from the snap:

$ sudo apt install snapcraft
$ snapcraft
$ sudo snap install baymax*.snap --dangerous
$ baymax-elopio

And go back to IRC to check.

Last thing would be to push the source code we have just written to a GitHub repo, and enable the continuous delivery in build.snapcraft.io. Go to your server and install the bot with sudo snap install baymax-elopio --edge. Now every time somebody from your team makes a change in the master repo in GitHub, the bot in your server will be automatically updated to get those changes within a few hours without any work from your side.
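On the server, that amounts to something like the following sketch; the refresh is only needed if you don't want to wait for the automatic update:

$ sudo snap install baymax-elopio --edge
$ sudo snap refresh baymax-elopio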

If you are into chatops, make sure that every time you do a manual task, you also plan for some time to turn that task into a script that can be executed by your bot. And get ready to enjoy tons and tons of free time, or just keep going through those 400 open bugs, whichever you prefer :)

Deploy to all SBCs with Gobot and a single snap package

I love playing with my prototyping boards. Here at Ubuntu we are designing the core operating system to support every single-board computer, and keep it safe, updated and simple. I've learned a lot about physical computing, but I always have a big problem when my prototype is done, and I want to deploy it. I am working with a Raspberry Pi, a DragonBoard, and a BeagleBone. They are all very different, with different architectures, different pins, onboard capabilities and peripherals, and they can have different operating systems. When I started learning about this, I had to write 3 programs that were very different, if I wanted to try my prototype in all my boards.

picture of the three different SBCs

Then I found Gobot, a framework for robotics and IoT that supports my three boards, and many more. With the added benefit that you can write all the software in the lovely and clean Go language. The Ubuntu store supports all their architectures too, and packaging Go projects with snapcraft is super simple. So we can combine all of this to make a single snap package that with the help of Gobot will work on every board, and deploy it to all the users of these boards through the snaps store.

Let's dig into the code with a very simple example to blink an LED, first for the Raspberry Pi only.

package main

import (
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  adaptor := raspi.NewAdaptor()
  led := gpio.NewLedDriver(adaptor, "7")

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

In there you will see some of the Gobot concepts. There's an adaptor for the board, a driver for the specific device (in this case the LED), and a robot to control everything. In this program, there are only two things specific to the Raspberry Pi: the adaptor and the name of the GPIO pin ("7").

picture of the Raspberry Pi prototype

It works nicely in one of the boards, but let's extend the code a little to support the other two.

package main

import (
  "log"
  "os/exec"
  "strings"
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/beaglebone"
  "gobot.io/x/gobot/platforms/dragonboard"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  out, err := exec.Command("uname", "-r").Output()
  if err != nil {
    log.Fatal(err)
  }
  var adaptor gobot.Adaptor
  var pin string
  kernelRelease := string(out)
  if strings.Contains(kernelRelease, "raspi2") {
    adaptor = raspi.NewAdaptor()
    pin = "7"
  } else if strings.Contains(kernelRelease, "snapdragon") {
    adaptor = dragonboard.NewAdaptor()
    pin = "GPIO_A"
  } else {
    adaptor = beaglebone.NewAdaptor()
    pin = "P8_7"
  }
  digitalWriter, ok := adaptor.(gpio.DigitalWriter)
  if !ok {
    log.Fatal("Invalid adaptor")
  }
  led := gpio.NewLedDriver(digitalWriter, pin)

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

We are basically adding a block there to select the right adaptor and pin, depending on which board the code is running on. Now we can compile this program, throw the binary onto the board, and give it a try.
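For example, assuming the file above is saved as main.go, Go can cross-compile it from an amd64 machine: an armhf binary for the Raspberry Pi and BeagleBone, and an arm64 one for the DragonBoard. The output names and the scp destination below are just placeholders:

env GOOS=linux GOARCH=arm GOARM=7 go build -o gobot-blink-armhf .
env GOOS=linux GOARCH=arm64 go build -o gobot-blink-arm64 .
scp gobot-blink-armhf <user>@<board-address>: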

picture of the Dragonboard prototype

But we can do better. If we package this in a snap, anybody with one of the boards and an operating system that supports snaps can easily install it. We also open the door to continuous delivery and crowd testing. And as I said before, super simple, just put this in the snapcraft.yaml file:

name: gobot-blink-elopio
version: master
summary:  Blink snap for the Raspberry Pi with Gobot
description: |
  This is a simple example to blink an LED in the Raspberry Pi
  using the Gobot framework.

confinement: devmode

apps:
  gobot-blink-elopio:
    command: gobot-blink

parts:
  gobot-blink:
    source: .
    plugin: go
    go-importpath: github.com/elopio/gobot-blink

To build the snap, here is a cool trick thanks to the work that kalikiana recently added to snapcraft. I'm writing this code in my development machine, which is amd64. But the raspberry pi and beaglebone are armhf, and the dragonboard is arm64; so I need to cross-compile the code to get binaries for all the architectures:

snapcraft --target-arch=armhf
snapcraft clean
snapcraft --target-arch=arm64

That will leave two .snap files in my working directory that I can then upload to the store with snapcraft push. Or I can just push the code to GitHub and let build.snapcraft.io take care of building and pushing for me.
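As a sketch, the upload step would look something like this; the file names are illustrative, use the ones snapcraft actually produced in your working directory:

snapcraft push gobot-blink-elopio_master_armhf.snap --release=edge
snapcraft push gobot-blink-elopio_master_arm64.snap --release=edge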

Here is the source code for this simple example: https://github.com/elopio/gobot-blink

Of course, Gobot supports many more devices that will let you build complex robots. Just take a look at the documentation in the Gobot site, and at the guide about deployable packages with Gobot and snapcraft.

picture of the BeagleBone prototype

If you have one of the boards I'm using here to play, give it a try:

sudo snap install gobot-blink-elopio --edge --devmode
sudo gobot-blink-elopio

Now my experiments will be to try to make the snap more secure, with strict confinement. If you have any questions or want to help, we have a topic in the forum.

User acceptance testing of snaps, with Travis

Travis CI offers a great continuous integration service for the projects hosted on GitHub. With it, you can run tests, deliver artifacts and deploy applications every time you push a commit, on pull requests, after they are merged, or with some other frequency.

Last week Travis CI updated the Ubuntu 14.04 (Trusty) machines that run your tests and deployment steps. This update came with a nice surprise for everybody working to deliver software to Linux users, because it is now possible to install snaps in Travis!

I've been excited all week telling people about all the doors that this opens; but if you have been following my adventures in the Ubuntu world, by now you can probably guess that I'm mostly thinking about all the potential this has for automated testing. For the automation of user acceptance tests.

User acceptance tests are executed from the point of view of the user, with your software presented as a black box to them. The tests can only interact with the software through the entry points you define for your users. If it's a CLI application, then the tests will call commands and subcommands and check the outputs. If it's a website or a desktop application, the tests will click things, enter text and check the changes on this GUI. If it's a service with an HTTP API, the tests will make requests and check the responses. In these tests, the closer you can get to simulating the environment and behaviour of your real users, the better.

Snaps are great for the automation of user acceptance tests because they are immutable and they bundle all their dependencies. With this we can make sure that your snap will work the same on any of the operating systems and architectures that support snaps. The snapd service takes care of hiding the differences and presenting a consistent execution environment for the snap. So, getting a green execution of these tests in the Trusty machine of Travis is a pretty good indication that it will work on all the active releases of Ubuntu, Debian, Fedora and even on a Raspberry Pi.

Let me show you an example of what I'm talking about, obviously using my favourite snap called IPFS. There is more information about IPFS in my previous post.

Check below the packaging metadata for the IPFS snap, a single snapcraft.yaml file:

name: ipfs
version: master
summary: global, versioned, peer-to-peer filesystem
description: |
  IPFS combines good ideas from Git, BitTorrent, Kademlia, SFS, and the Web.
  It is like a single bittorrent swarm, exchanging git objects. IPFS provides
  an interface as simple as the HTTP web, but with permanence built in. You
  can also mount the world at /ipfs.
confinement: strict

apps:
  ipfs:
    command: ipfs
    plugs: [home, network, network-bind]

parts:
  ipfs:
    source: https://github.com/ipfs/go-ipfs.git
    plugin: nil
    build-packages: [make, wget]
    prepare: |
      mkdir -p ../go/src/github.com/ipfs/go-ipfs
      cp -R . ../go/src/github.com/ipfs/go-ipfs
    build: |
      env GOPATH=$(pwd)/../go make -C ../go/src/github.com/ipfs/go-ipfs install
    install: |
      mkdir $SNAPCRAFT_PART_INSTALL/bin
      mv ../go/bin/ipfs $SNAPCRAFT_PART_INSTALL/bin/
    after: [go]
  go:
    source-tag: go1.7.5

It's not the most simple snap because they use their own build tool to get the go dependencies and compile; but it's also not too complex. If you are new to snaps and want to understand every detail of this file, or you want to package your own project, the tutorial to create your first snap is a good place to start.

What's important here is that if you run snapcraft using the snapcraft.yaml file above, you will get the IPFS snap. If you install that snap, then you can test it from the point of view of the user. And if the tests work well, you can push it to the edge channel of the Ubuntu store to start the crowdtesting with your community.

We can automate all of this with Travis. The snapcraft.yaml for the project must be already in the GitHub repository, and we will add there a .travis.yml file. They have good docs to prepare your Travis account. First, let's see what's required to build the snap:

sudo: required
services: [docker]

script:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"

For now, we build the snap in a docker container to keep things simple. We have work in progress to be able to install snapcraft in Trusty as a snap, so soon this will be even nicer running everything directly in the Travis machine.

This previous step will leave the packaged .snap file in the current directory. So we can install it adding a few more steps to the Travis script:

[...]

script:
  - docker [...]
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous

And once the snap is installed, we can run it and check that it works as expected. Those checks are our automated user acceptance test. IPFS has a CLI client, so we can just run commands and verify outputs with grep. Or we can get fancier using shunit2 or bats. But the basic idea would be to add to the Travis script something like this:

[...]

script:
  [...]
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"
  - [...]

If one of those checks fail, Travis will mark the execution as failed and stop our release process until we fix them. If instead, all of the checks pass, then this version is good enough to put into the store, where people can take it and run exploratory tests to try to find problems caused by weird scenarios that we missed in the automation. To help with that we have the snapcraft enable-ci travis command, and a tutorial to guide you step by step setting up the continuous delivery from Travis CI.

For the IPFS snap we had for a long time a manual smoke suite, that our amazing community of testers have been executing over and over again, every time we want to publish a new release. I've turned it into a simple bash script that from now on will be executed frequently by Travis, and will tell us if there's something wrong before anybody gives it a try manually. With this our community of testers will have more time to run new and interesting scenarios, trying to break the application in clever ways, instead of running the same repetitive steps many times.

Thanks to Travis and snapcraft we no longer have to worry about a big part of our release process. Continuous integration and delivery can be fully automated, and we will have to take a look only when something breaks.

As for IPFS, it will keep being my guinea pig to guide new features for snapcraft and showcase them when ready. It has many more commands that have to be added to the automated test suite, and it also has a web UI and an HTTP API. Lots of things to play with! If you would like to help, and on the way learn about snaps, automation and the decentralized web, please let me know. You can take a look on my IPFS snap repo for more details about testing snaps in Travis, and other tricks for the build and deployment.

screenshot of the IPFS smoke test running in travis

“Mapillaryando” in the cantinas of San José

-- by Marcia Ugarte and Joaquín Lizano

A Saturday afternoon turned out to be the perfect moment for a group of people to get together in San José and continue the work of contributing to free mapping, this time covering the cantinas of the capital's downtown. In Costa Rica, “cantina” refers to a popular bar, probably with quite a few years on it, with no snobbish airs, offering alcohol and food at a good price, or just alcohol.

This self-sacrificing group laid out a visiting plan and some minimal rules for the mapping process: drink at most one beer at each place, and where possible eat, ideally a well-known boca (bar snack) of the corresponding bar. The route started at El Gran Vicio, a lifelong cantina inside the Mercado Central; it continued to the Ballestero, the only cantina left on one of the four corners that marked the entrance to the San José of old, said to have one of the best chifrijos; then we moved on to La Embajada, a dive bar with a very, very long counter, famous for its gallo de chorizo; next came El Faro, a three-storey cantina with a beer happy hour under a thousand colones and good ribs; after that La Bohemia, then Wongs, and the last brave ones finished in the early morning at Area City.

Detailing the route: the starting point, “El Gran Vicio”, is probably one of the oldest cantinas in San José. Located in the city's Mercado Central, it opened its doors in 1880. We could say it is so old that it seems to be for men only. The urinal sits in a corner of the bar and its door neither closes nor opens; it just hangs there, half fitted, offering none of the privacy expected of a restroom. A women's bathroom? There isn't one. The wall opposite the counter of this space, which works as a corridor because of how narrow it is, is covered with signatures, messages and graphic memories.

  • a clientele that knows each other, some wearing their market work uniforms
  • the bartender was not very friendly with strangers (us…)
  • it's a drop-in bar (you drop in, have a beer and/or a shot, and leave)
  • an interesting experience

From there we set off to the “Ballestero”. Few cantinas have natural plants at the entrance: a sign that we were in for a different experience. It sits on one of the corners at the end of the wide street that leads into the capital from the north. From the corner table we happily ate the bocas (we can attest that the patacones with beans are among the best in the city) while admiring the mirrored disco ball in the middle of the ceiling (no spotlights aimed at it, no mechanism to make it spin), a whim of the owners to give the place a festive touch. Maybe the ball doesn't match the glass collection and the family photos on the walls; maybe that is exactly the style they were going for.

  • Tex-Mex music
  • house chilera (pickled hot-pepper relish)
  • credit cards not welcome (but accepted if you insist a lot)
  • better to bring cash

The journeys between bars deserve a mention too: walks of young people (and others not so young) holding their phones horizontally above their heads, recording the way, following their “leader”, who walks with a technological staff topped by an eye looking at the heights. Let's just say we did not go unnoticed by the San José public. If you saw us passing by out of nowhere, you probably wouldn't know what to think either… tourists, aliens, geeks shooting a film/documentary about Chepe, hunting Pokémon…?

The next bar was “La Embajada”, which ended up being declared the new Mapillary embassy in Costa Rica. Its main feature is the enormous counter that takes up a large part of the space and gives the impression that, if we dared to walk to the end, the darkness would swallow us; but no. The back is full of tables, with enough room for the whole group plus a mariachi band that sets up at the end of the counter. It is genuinely surprising that such a large space exists and that you could easily walk past it without realizing what is inside.

  • the beer at 900 seems to be old advertising they never took down; it cost 1000
  • a very long counter
  • the gallos with chorizo or salchichón come without a tortilla
  • the mariachi competes with the music from the sound booth
  • they don't split bills
  • very crowded

We continued the route at dusk, which, as the light faded, made it harder to map while walking; but, “self-sacrificing” as we are, we did everything in our power not to slow down the mapping. We walked through the very centre of Chepe and reached El Faro, an old three-storey building with a view over the south of San José. How far you can see from this lighthouse will depend, among other things, on how much of a party you throw in the place. Each floor is its own atmosphere; in fact, the third floor is for parties and is not normally open. The ground floor was full, so we went up to the second floor, which also had 80s rock/pop appreciated by most of the group. The open windows let in a level of wind that, if you felt like drifting off, could bring the lighthouse to mind and transport you who knows where.

  • good service
  • they split bills
  • they accept cards
  • different atmospheres
  • good music

Already on a roll (more rumba than route), we walked down one block and arrived at La Bohemia, a cantina of long tradition, even for some members of the mapping group.

  • it's easy to feel welcome in the place
  • a strong connection among the regulars
  • they even gave us cake from a birthday they were celebrating
  • few bocas
  • they split bills
  • they accept cards

By that time of night the route had become unmappable, but our self-sacrificing spirit was not done yet. We went to check out another couple of places that we could include in future missions. Wongs is more a restaurant than a cantina, and instead of bocas there were dumplings, an exceptional moment along the route. We finished in the early hours of the next day at Area City, celebrating the birthday of someone in our mapping group. A great outing which, if we keep up that festive spirit, can produce many future self-sacrificing volunteers who will let us get to know more of those traditional cantinas that still remain in San José.

The Mapillary embassy in Costa Rica

Mining Ethereum

Requirements

  • A mining machine
  • Ubuntu Server 16.04
  • Optional: a video card
  • Python and python-twisted
  • Ethereum
  • cpp-ethereum

NOTE: Ubuntu Server is assumed; on Ubuntu Desktop some of the requirements come preinstalled with the system.

Installation

  1. Install Ubuntu 16.04.

  2. Install python and python-twisted:

sudo apt-get install python
sudo apt-get install python-twisted
  3. Once the operating system is installed, enable the Ethereum PPAs:
sudo add-apt-repository ppa:ethereum/ethereum
sudo add-apt-repository ppa:ethereum/ethereum-qt
sudo add-apt-repository ppa:ethereum/ethereum-dev
sudo apt-get update
  4. Install ethereum:
sudo apt-get install ethereum
  5. Install cpp-ethereum:
sudo apt-get install cpp-ethereum
  6. Clone the eth-proxy repository (a sketch follows after the note below).

  7. Create a wallet with geth or parity (see the same sketch below).

  8. Install the video drivers, if a video card is going to be used.

  9. Modify the eth-proxy configuration file to use the wallet.

  10. In the eth-proxy directory, run eth-proxy.py:

sudo python eth-proxy/eth-proxy.py
  11. Run ethminer pointing at localhost:
ethminer -F http://127.0.0.1:8080/minador -G

NOTE: The -G option tells ethminer to mine using the GPU; if you do not have a GPU, use --allow-opencl-cpu.
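For steps 6 and 7 above, a minimal sketch, assuming the eth-proxy repository listed in the references and a wallet created with geth (the account address that geth prints is what goes into the eth-proxy configuration):

git clone https://github.com/Atrides/eth-proxy.git
geth account new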

References

  • https://github.com/paritytech/parity
  • https://github.com/Atrides/eth-proxy
  • https://launchpad.net/~ethereum/+archive/ubuntu/ethereum
  • http://ethdocs.org/en/latest/ethereum-clients/cpp-ethereum/installing-binaries/linux-ubuntu-ppa.html

Crowdtesting with the Ubuntu community: the case of IPFS

Here at Ubuntu we are working hard on the future of free software distribution. We want developers to release their software to any Linux distro in a way that's safe, simple and flexible. You can read more about this at snapcraft.io.

This work is extremely fun because we have to work constantly with a wild variety of free software projects to make sure that the tools we write are usable and that the workflow we are proposing makes sense to developers and gives them a lot of value in return. Today I want to talk about one of those projects: IPFS.

IPFS is the permanent and decentralized web. How cool is that? You get a peer-to-peer distributed file system where you store and retrieve files. They have a nice demo in their website, and you can give it a try on Ubuntu Trusty, Xenial or later by running:

$ sudo snap install ipfs

screenshot of the IPFS peers

So, here's one of the problems we are trying to solve. We have millions of users on the Trusty version of Ubuntu, released during 2014. We also have millions of users on the Xenial version, released during 2016. Those two versions are stable now, and following the Ubuntu policies, they will get only security updates for 5 years. That means that it's very hard, almost impossible, for a young project like IPFS to get into the Ubuntu archives for those releases. There will be no simple way for all those users to enjoy IPFS; they would have to use a Personal Package Archive or install the software from a tarball. Both methods are complex with high security risks, and both require the users to put a lot of trust in the developers, more than they should ever trust anybody.

We are closing the Zesty release cycle which will go out in April, so it's too late there too. IPFS could make a deb, put it into Debian, wait for it to sync to Ubuntu, and then it's likely that it will be ready for the October release. Aside from the fact that we have to wait until October, there are a few other problems. First, making a deb is not simple. It's not too hard either, but it requires quite some time to learn to do it right. Second, I mentioned that IPFS is young, they are on the 0.4.6 version. So, it's very unlikely that they will want to support this early version for such a long time as Debian and Ubuntu require. And they are not only young, they are also fast. They add new features and bug fixes every day and make new releases almost every week, so they need a feedback loop that's just as fast. A 6 months release cycle is way too slow. That works nicely for some kinds of free software projects, but not for one like IPFS.

They have been kind enough to let me play with their project and use it as a test subject to verify our end-to-end workflow. My passion is testing, so I have been focusing on continuous delivery to get happy early adopters and constant feedback about the most recent changes in the project.

I started by making a snapcraft.yaml file that contains all the metadata required for the snap package. The file is pretty simple and to make the first version it took me just a couple of minutes, true story. Since then I've been slowly improving and updating it with small changes. If you are interested in doing the same for your project, you can read the tutorial to create a snap.

I built and tested this snap locally on my machines. It worked nicely, so I pushed it to the edge channel of the Ubuntu Store. Here, the snap is not visible on user searches, only the people who know about the snap will be able to install it. I told a couple of my friends to give it a try, and they came back telling me how cool IPFS was. Great choice for my first test subject, no doubt.

At this point, following the pace of the project by manually building and pushing new versions to the store was too demanding, they go too fast. So, I started working on continuous delivery by translating everything I did manually into scripts and hooking them to travis-ci. After a few days, it got pretty fancy, take a look at the github repo of the IPFS snap if you are curious. Every day, a new version is packaged from the latest state of the master branch of IPFS and it is pushed to the edge channel, so we have a constant flow of new releases for hardcore early adopters. After they install IPFS from the edge channel once, the package will be automatically updated in their machines every day, so they don't have to do anything else, just use IPFS as they normally would.
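For the hardcore early adopters mentioned above, opting in to that daily stream is a single command; after this, snapd keeps the edge version up to date automatically:

$ sudo snap install ipfs --edge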

Now with this constant stream of updates, me and my two friends were not enough to validate all the new features. We could never be sure if the project was stable enough to be pushed to the stable channel and make it available to the millions and millions of Ubuntu users out there.

Luckily, the Ubuntu community is huge, and they are very nice people. It was time to use the wisdom of the crowds. I invited the most brave of them to keep the snap installed from edge and I defined a simple pipeline that leads to the stable release using the four available channels in the Ubuntu store:

  • When a revision is tagged in the IPFS master repo, it is automatically pushed to edge channel from travis, just as with any other revision.
  • Travis notifies me about this revision.
  • I install this tagged revision from edge, and run a super quick test to make sure that the IPFS server starts.
  • If it starts, I push the snap to the beta channel.
  • With a couple of my friends, we run a suite of smoke tests.
  • If everything goes well, I push the snap to the candidate channel.
  • I notify the community of Ubuntu testers about a new version in the candidate channel. This is where the magic of crowd testing happens (see the example after this list).
  • The Ubuntu testers run the smoke tests in all their machines, which gives us the confidence we need because we are confirming that the new version works on different platforms, distros, distro releases, countries, network topologies, you name it.
  • This candidate release is left for some time in this channel, to let the community run thorough exploratory tests, trying to find weird usage combinations that could break the software.
  • If the tag was for a final upstream release, the community also runs update tests to make sure that the users with the stable snap installed will get this new version without issues.
  • After all the problems found by the community have been resolved or at least acknowledged and triaged as not blockers, I move the snap from candidate to the stable channel.
  • All the users following the stable channel will automatically get a very well tested version, thanks to the community who contributed with the testing and accepted a higher level of risk.
  • And we start again, the never-ending cycle of making free software :)
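Following the candidate channel is just a matter of pointing snapd at it; for a tester who already has the edge snap installed, a sketch:

$ sudo snap refresh ipfs --channel=candidate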

Now, let's go back to the discussion about trust. Debian and Ubuntu, and most of the other distros, rely on maintainers and distro developers to package and review every change on the software that they put in their archives. That is a lot of work, and it slows down the feedback loop a lot, as we have seen. Here we automated most of the tasks of a distro maintainer, and the new revisions can be delivered directly to the users without any reviews. So the users are trusting their upstream developers directly, without intermediaries, but it's very different from the previously existing and unsafe methods. The code of snaps is installed read-only, very well constrained with access only to their own safe space. Any other access needs to be declared by the snap, and the user is always in control of which access is permitted to the application.

This way upstream developers can go faster but without exposing their users to unnecessary risks. And they just need a simple snapcraft.yaml file and to define their own continuous delivery pipeline, on their own timeline.

By removing the distro as the intermediary between the developers and their users, we are also making a new world full of possibilities for the Ubuntu community. Now they can collaborate constantly and directly with upstream developers, closing this quick feedback loop. In the future we will tell our children of the good old days when we had to report a bug in Ubuntu, which would be copied to Debian, then sent upstream to the developers, and after 6 months, the fix would arrive. It was fun, and it led us to where we are today, but I will not miss it at all.

Finally, what's next for IPFS? After this experiment we got more than 200 unique testers and almost 300 test installs. I now have great confidence in this workflow, new revisions were delivered on time, existing Ubuntu testers became new IPFS contributors and I can now safely recommend IPFS users to install the stable snap. But there's still plenty of work ahead. There are still manual steps in the pipeline that can be scripted, the smoke tests can be automated to leave more free time for exploratory testing, we can also release to the armhf and arm64 architectures to get IPFS into the IoT world, and well, of course the developers are not stopping, they keep releasing new interesting features. As I said, plenty of opportunities for us as distro contributors.

screenshot of the IPFS snap stats

I'd like to thank everybody who tested the IPFS snap, specially the following people for their help and feedback:

  • freekvh
  • urcminister
  • Carla Sella
  • casept
  • Colin Law
  • ventrical
  • cariboo
  • howefield

<3

If you want to release your project to the Ubuntu store, take a look at the snapcraft docs, the Ubuntu tutorials, and come talk to us in Rocket Chat.

Maperespeis #2: Volcán Poás

Last Sunday we went to make free maps at the Poás Volcano.

This was the second geek excursion of the JaquerEspéis. From the first one we learned that we had to wait for the dry season, because you can't map in a storm. And the day was perfect: not only was it sunny, but the crater was completely clear, so we could add a new place to the virtual tour of Costa Rica.

In addition, this time we arrived much better prepared, with several phones running mapillary, osmand and OSMTracker, a 360 camera, a Garmin GPS, a drone, and even a notebook and two biologists.

The MaperEspeis procession

This is how it works. Everybody, with the phone's GPS enabled, waits for the phone to find its location. Then each person uses whichever application they prefer to collect data: photos, audio, videos, text notes, traces, annotations in the notebook...

Later, from our respective homes, we upload, publish and share all the collected data. This data helps us improve the free maps of OpenStreetMap. We add everything from things as simple as the location of a trash can to things as important as how accessible the place is for a person in a wheelchair, together with the location of all those access points or the parts where they are missing. Each person improves the map a little bit, in the area they know or passed through. With more than 3 million users, OpenStreetMap is the best map of the world there is; and it is particularly important in regions like ours, which hold little economic potential for the megacorporations that make and sell closed maps while stealing private data from their users.

Since the maps we make are free, what comes next has no limits. There are groups working on reconstructing three-dimensional models from the photos, on identifying and interpreting signs and signposts, on applications that compute the optimal route to get anywhere using any combination of means of transport, on applications to assist decision-making when designing the future of a city, and many other things. All based on shared knowledge and community.

The image above is the virtual tour in Mapillary. Since we recorded it with the 360 camera, you can click and drag with the mouse to see every angle. You can also click the play button at the top to follow the path we took. Or you can click on any green dot on the map to follow your own path.

Many thanks to everybody who signed up to map, especially to Denisse and Charles for acting as our guides and filling the trip with interesting facts about the flora, fauna, geology and historical importance of Poás.

Members of the MaperEspeis (more photos and videos here)

The next maperespeis will be on March 12.

Call for testing: MySQL

I promised that more interesting things were going to be available soon for testing in Ubuntu. There's plenty coming, but today here is one of the greatest:

$ sudo snap install mysql --channel=8.0/beta

screenshot of mysql snap running

Lars Tangvald and other people at MySQL have been working on this snap for some time, and now they are ready to give it to the community for crowd testing. If you have some minutes, please give them a hand.

We have a testing guide to help you get started.

Remember that this should run in trusty, xenial, yakkety, zesty and in all flavours of Ubuntu. It would be great to get a diverse pool of platforms and test it everywhere.

In here we are introducing a new concept: tracks. Notice that we are using --channel=8.0/beta, instead of only --beta as we used to do before. That's because mysql has two different major versions currently active. In order to try the other one:

$ sudo snap install mysql --channel=5.7/beta
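To check which track and channel an installed snap is following, and which other channels are available, snap info can be used:

$ snap info mysql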

Please report back your results. Any kind of feedback will be highly appreciated, and if you have doubts or need a hand to get started, I'm hanging around in Rocket Chat.