
Why we go to LinuxFest Northwest

For the second year in a row since I moved to Redmond, I’ll be joining the Microsoft crew sponsoring and attending LinuxFest Northwest in Bellingham, Washington. This is one of the largest, if not the largest, Linux & open source events in the region, and it draws large crowds of smart geeks from Canada, the United States and other countries, as well as corporate sponsors like us.

One of the questions I get the most is why Microsoft sponsors and participates in this event. Microsoft has been sponsoring and participating in many open source conferences, projects and events in many parts of the world, but some people wonder why a non-corporate, pure Linux event, and others are naturally skeptical about it.

I don’t think there’s a single reason why we rally to convince our bosses to do it, but we have been trying to do more closer to home when it comes to open source. There is a vibrant Linux and open source ecosystem in Redmond, the Puget Sound area and the Pacific Northwest, and while we have been very active in Europe and the Bay Area, we haven’t done a good job of connecting with the people closer to home.

For example, I recently had the fantastic opportunity to help the Pacific Northwest Seismic Network at the University of Washington run their Ubuntu-based Node.js applications for their “Quake Shake”. I think being able to help with that project, or with any other project or conference in any other part of the globe, is a good thing, but there’s no distance excuse for Bellingham!

Another great reason is the LFNW community itself. We love the crowd, the lively discussions, the sharing and learning spirit. And as long as we are welcomed by the community, we’ll continue to seek opportunities to connect with it. Plus, this is a really cool conference. This year, I’m cutting my vacation short to attend the event. A coworker is skipping church duty to help. We have heard from many engineers and program managers that they will be attending and want to carpool and staff the booth. And my friend has been investing all this time in logistics, ensuring we have a meaningful presence.

The community invites some of the sponsors to bring unique content that is relevant to the participants. Last year I had the opportunity to demo a Raspberry Pi device connected to Office via Azure. Most people in the room didn’t know Office runs in a browser, or that Azure could run Linux. But they listened and they thought it was cool. Some of them are now partners, helping customers do more with open source in Azure.

This year, I want to bring more Debian to this event, because I have been working a lot inside Microsoft to get more people up to speed with Debian-based development, and we have serious community momentum around Debian in Azure. In true Microsoft tradition, we will have a cake to celebrate the arrival of Debian 8. I’ll have in mind all of those friends in the Debian community with whom I’ve been working for years, to make sure we don’t drop the ball in responding to what our customers, partners and the community want when it comes to Debian.

And, hopefully, next year we’ll be back again in Bellingham for LinuxFest Northwest 2016!


Instant Debian – Build a Web Server is now available

During the past few months I worked on a book project with Packt to make it easier for people new to Debian to leverage it for Web-based applications. I’m happy to announce that Instant Debian – Build a Web Server is now available. Although it is not my first project with Packt (I’ve reviewed some nginx books before), it is the first one that I’m authoring, and I’m already working on some new projects.

I had the fortune of having a senior leader from the Debian Project whom I deeply respect as my technical reviewer, and the full support of the Packt team. The motivation for the book is simple: in a world of elastic clouds, simpler NoSQL stores and explosive growth, developers, sysadmins and business leaders are less concerned about the operating system and more about their time-to-market. In this book, I use my 10 years of experience with Debian to provide a simpler path to a solid Web platform.

In fact, all of my immediate writing projects are related to those low-hanging fruits that add incredible value for business decision makers in the broader technology conversations of today: elasticity, information security and privacy, and performance. In a way, this book answers the “why” I get from the business side when explaining technical decisions related to Debian: why use noexec in /tmp, why use codenames in sources.list for APT, why use sudo, and so on, all with one goal: reducing time-to-market.
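For readers who haven’t met those settings before, they look roughly like this; the mount options and the release codename are illustrative, not prescriptions from the book:

# /etc/fstab: mount /tmp with noexec so stray scripts can't be executed from it
tmpfs  /tmp  tmpfs  defaults,nosuid,noexec  0  0

# /etc/apt/sources.list: track the codename (here wheezy) instead of "stable",
# so a new Debian release never changes your platform behind your back
deb http://ftp.debian.org/debian wheezy main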

This is a beginner’s book. If you haven’t heard about Debian before, and would like to leverage virtualization or cloud technologies to create a “template” for your Web deployment, Instant Debian – Build a Web Server will provide exactly that, while exploring the rationales and laying a solid foundation for you to continue exploring the system.


Fast, bulk loading of Memcache for Nginx

Years ago I wrote about how I use Nginx as a proxy in front of Apache in some installations. That architecture includes Memcache. The configuration is very simple; it is enough to add the following to the location section we want to cache:

set $memcached_key $uri;
memcached_pass 127.0.0.1:11211;
error_page 404 @fallback;

And to add the corresponding @fallback location:

location @fallback {
    proxy_pass http://localhost:8000;
}

The only problem, as some people who have used Nginx with Memcache have discovered, is that someone has to fill Memcache with objects so that Nginx can read them.

Usually, the application developers will use their programming language’s libraries to access Memcache and load some objects into it. This works, and it is how most people implement this scenario. However, if you want to load several files into Memcache quickly, there aren’t many simple, readily available tools.

For example, two months ago someone published a method on the Nginx wiki for preloading memcache with Python. It is an interesting approach, but complicated to maintain and decidedly experimental.

However, libmemcached already includes a client called memccp that can load files into Memcache. The problem is that this client does not let you define the key under which the object is stored in Memcache. That key is $uri, for example something like /wp-content/plugins/akismet/akismet.gif.

When Nginx sees a client doing a GET for this file, it serves it from Memcache, which in this scenario saves us opening a TCP connection to localhost, having Apache handle and answer a request, and potentially disk I/O.

This patch to libmemcached makes it possible to define a key with --key, which makes it easy to preload files such as images or CSS into Memcache. Its usage is simple and it can be invoked from a shell script (tested with dash):

#!/bin/sh
BASE="/var/www/mysite"
for file in `\
  find $BASE -type f \( \
    -name '*.jpg' -or \
    -name '*.gif' -or \
    -name '*.png' \) \
  | sed "s#$BASE##"`
do
  echo "Adding $file to memcached..."
  sudo memccp --key=$file --servers=localhost $BASE$file
done
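To double-check what actually landed in memcached, the memcdump and memccat utilities that ship in the same libmemcached-tools package can be handy; for instance:

# List every key currently stored in the local memcached instance
memcdump --servers=localhost
# Print the cached body of one of the objects loaded by the script above
memccat --servers=localhost /wp-content/plugins/akismet/akismet.gif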

Among other scenarios this enables, you can store files for different virtual hosts. In that case I suggest configuring $memcached_key to use $http_host and $uri, and adding a prefix variable to your script, as sketched below. You can also run another memcache instance, if you really need it. memccp has other problems; for example, it does not handle character encodings very well. But for binary, usually static, files it saves quite a bit of work.
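A minimal sketch of the Nginx side, assuming the key is simply the host name followed by the URI:

set $memcached_key "$http_host$uri";
memcached_pass 127.0.0.1:11211;
error_page 404 @fallback;

On the script side, the key passed to memccp would then need the same host name prefix, e.g. --key=www.example.com$file (the host name here is just an illustration).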

The repository on GitHub is a Debian source package. If you have the dependencies (sudo apt-get build-dep libmemcached-tools) you can build the package (dpkg-buildpackage -b) and install libmemcached-tools, which contains memccp.
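Roughly, from the top of the cloned source tree (the resulting .deb file name will vary with version and architecture):

sudo apt-get build-dep libmemcached-tools
dpkg-buildpackage -b -us -uc
sudo dpkg -i ../libmemcached-tools_*.deb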

This scenario is one of those I describe in my upcoming quick book on Debian for Web applications, which is currently in the editing phase.


Reviving the Neo Freerunner at Campus Party Quito 2011

More than three years ago I got my hands on a Neo Freerunner, the second publicly available version of the OpenMoko project’s cell phone. I then moved to Android (an HTC Magic) for quite a while and left it behind for other smartphones. But the Freerunner was still there, displaced by newer technology and more complete software.

Until a week ago.

I was invited to give two talks on the Innovation stage at Campus Party Quito 2011, and one of them was about clusters. For those who know me, and the regulars of this blog, service clustering (as opposed to scientific computing clusters) is a specific area of expertise I have been passionate about for the last five years.

During my time at Electrificación del Caroní, first part of Corporación Venezolana de Guayana and now of Corporación Eléctrica Nacional, one of the most important hydroelectric companies in the world, my team was assigned the responsibility of building an e-mail cluster with shared storage.

The cluster, which is in production, consists of geographically distributed Itanium servers (EDELCA has a network that stretches from Brazil to Colombia), as well as highly available load balancers, its own directory service, and so on.

I have also built clusters with virtual machines before, for example a few months ago at TechDay Quito, or in some screencasts for my job.

But for Campus Party Quito I thought people needed to see something cooler, so I put together a simple cluster for a TCP service (HTTP) using static HTML and the following cluster members:

1. Two (2) Debian GNU/Linux 6.0 virtual machines running on Hyper-V on Windows Server 2008 R2, itself booted from an external USB hard drive (yes, you can run Windows from a USB drive), with nginx as the Web server

2. One (1) Windows Server 2008 R2 virtual machine with IIS running PHP (specifically Drupal), also on Hyper-V

3. One (1) physical machine with Debian GNU/Linux Sid for testing the cluster

4. One (1) Openmoko Neo Freerunner running Hackable:1 (a Debian-based distribution), with lighttpd as its Web server, connected to the cluster’s router via Wi-Fi

As for Hackable:1, the installation was straightforward. I did not use a microSD card, but the phone’s internal flash. All I needed was dfu-util (available in Debian) and the two images (kernel and root filesystem) to flash the device. I then set up USB networking to SSH into the phone, install lighttpd and configure the IP address. All the details are in the link above.
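From memory, the flashing step looked roughly like this; the image file names are placeholders, so grab the actual Hackable:1 kernel and rootfs images referenced in the link above:

# Write the kernel image into the phone's internal flash (phone in DFU mode)
sudo dfu-util -a kernel -R -D uImage-hackable1.bin
# Write the root filesystem image
sudo dfu-util -a rootfs -R -D hackable1-rootfs.jffs2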

The cluster is inspired by the Ultramonkey methodology, specifically the streamlined HA+LB scenario, where a single layer of Linux machines provides HA, LB and potentially the service itself. This reduces costs, but requires a somewhat cumbersome ARP configuration on the machines.
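That ARP tweak is the classic direct-routing problem in LVS-based setups like Ultramonkey: the real servers hold the virtual IP on a loopback alias but must not answer ARP requests for it. A minimal sketch of what this usually involves on each real server (the virtual IP 192.168.1.100 is just an example):

# Keep the real server from answering ARP for addresses on its loopback
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
# Add the cluster's virtual IP as a host route on the loopback interface
ip addr add 192.168.1.100/32 dev lo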

Honestly, the demo went quite well. The cluster handled the test cases nicely, which were:

1. Hot removal of one of the highly available load balancers

2. Hot removal of one of the machines providing the service

3. Hot re-insertion of the initially active load balancer (from case 1) and restoration of high availability

In fact, at the end the Freerunner ran out of battery (remember it was a cluster member connected via Wi-Fi) and the cluster handled it without any problems.

We also had the chance to review several advantages and challenges of building a cluster, and to identify the SPOFs (single points of failure) of the demo cluster. The slides are published here, and the video here. Thanks to everyone who participated and took an interest in the topic. I’m always available for any questions.


Stumbling with Mono and C# applications in Linux

In the past, I’ve helped customers run native Windows applications using emulation (wine) and/or virtualization, and in a few cases, native support for C# through open source projects such as Mono. But, as has been said, there are great benefits to having the source code, and I recently worked around an issue in Mono by recompiling a whole .NET project on Debian.

So here’s a brief guide reviewing the issues I found and how I solved each of them, leading to a natively compiled C# application that runs seamlessly on Debian and Windows. Please remember that I’m not a C#/.NET/Mono developer:

  • Missing assemblies: sometimes gmcs (the Mono compiler) will complain about missing assemblies, that is, libraries requested by the C# source code (files that end in .cs), presumably because some of their methods are used in that specific file. I have yet to find a .NET class that is not supported in Mono, so first and foremost you need to actually have that library installed on your system. If you’re using a Debian-based system, run aptitude search libmono.* or use the packages.debian.org site to find which package contains the library file (it ends in .dll), and then, when calling gmcs, remember to use the -r: flag, for example -r:System.Drawing.dll
  • Missing resource indicators: apparently .NET projects consume resources from resource files, which also seem to be XML files. Since I don’t know how to handle this situation, I resorted to the fine MonoDevelop IDE, since I had a .csproj file (a Visual Studio project file) in place which could hopefully connect the dots. So just use MonoDevelop (you need to install it separately), open the .csproj file and rebuild the entire project (F8), which in my case left just two errors to go
  • Issues with configuration management: it seems that the .NET Framework has some classes that help with application configuration via generic XML configuration files, and the project I was trying to compile did use them, so there were some tags in the configuration file that weren’t properly handled by Mono, e.g. the default system proxy configuration parameter, so I just deleted it from the XML. Also be advised that you do need to keep the .config XML file next to the resulting .exe file if your application accesses configuration parameters at runtime.
  • SSL- and TLS-certificate-related issues: finally, if your application uses an SSL or TLS Web Service (or any other SSL/TLS application FWIW) you might get errors like “authorization or decryption failed”, which are basically related to the application not having any certificates to work with. The easiest way to solve it is to download and import the CA certificate files from Mozilla and also download the certificate for the site you’re trying to access; with Mono’s own tools, you can run something along these lines (as the user that will run the application):
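# Import Mozilla's CA bundle into the user's Mono certificate store
mozroots --import --ask-remove
# Fetch and trust the certificate(s) of the specific site;
# replace the URL with the actual service endpoint you are calling
certmgr -ssl https://example.com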

That said, I had a great time fiddling with this C# application and Mono on my Debian Sid laptop. I was able to compile properly, without errors or framework-related warnings, using an IDE such as MonoDevelop, and end up with a full-featured native binary that’s interoperable.


Follow-up on Debian Sid on the Acer Aspire 1420P

Update: Using Debian Sid, video acceleration is not working correctly. glxgears shows a black screen with a glimpse of the gears when I move the window, and GLX-enabled applications aren’t rendering correctly either. Changing the AccelMethod doesn’t improve the situation. One reader asked how I calibrated the screen; I added the information below.

Last year I got my hands on an Acer Aspire 1420P. It’s now running Debian GNU/Linux unstable, and I’m transitioning from my older Thinkpad T400. Of course, it won’t install and work flawlessly out of the box, so here are my notes.

Ethernet

Use a recent kernel, say, 2.6.32 (2.6.32-3-686 in Debian, for example), since the Atheros Gigabit Ethernet card won’t work in older kernels without patching (e.g., you get a link but can’t actually send packets). My lspci reports the Ethernet controller as Attansic Technology Corp. Device 1063 (1969:1063) which uses the atl1c kernel driver.

Wi-Fi

The Wi-Fi card, an Intel WiFi Link 1000, which pci-utils reports as 8086:0083, needs a recent firmware-iwlwifi (or recent firmware for that card, if you don’t use the package), which I also took from sid.

Tablet touchscreen

It works with the evtouch driver, but you’ll need to apply a patch to xf86-input-evtouch (0.8.8 is in both sid and lucid) and calibrate the tablet. It seems the screen rotation does not generate an ACPI event, but if you bind a button to the xrandr rotation, you don’t need anything else for evtouch to catch up. The screen is multitouch, but the software doesn’t support it yet.
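For illustration, the command behind such a button would look something like this (LVDS1 is an assumption; check the output names xrandr reports on your system):

# Rotate the internal panel for tablet mode
xrandr --output LVDS1 --rotate right
# Rotate back for laptop mode
xrandr --output LVDS1 --rotate normal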

Calibration: the xf86-input-evtouch package includes a calibration utility that presents you with crosshairs you have to click in order. This program should output the minimum and maximum parameters for the evtouch driver in xorg.conf (actually in an out.txt that you should merge manually), but in my case it didn’t. Judging from the code of ev_calibrate, it should write the information to /etc/evtouch/config, but not in an xorg.conf-compatible format.

So, just copy the values for min[x,y] and max[x,y] into a corresponding InputDevice section in xorg.conf as follows:

Section "InputDevice"  Identifier "Touchscreen"  Driver "evtouch"  Option "Device" "/dev/input/event1"  Option "MinX" "0"  Option "MinY" "0"  Option "MaxX" "3825"  Option "MaxY" "3825"  Option "ReportingMode" "Raw"  Option "SendCoreEvents" "On"  ...EndSection

Things that work

Integrated Intel Mobile graphics chipset, Huawei integrated HSDPA modem (shows up as ttyUSBn), audio, wireless (provided you have the firmware), webcam, ACPI events for almost everything (lid rotation doesn’t seem to work), function keys… it’s all working nicely. This model has a Core 2 Duo U2300 processor and 2 GB of RAM.
