Why I’m writing about hackers and parenting in Spanish

Earlier this month I went live with a self-published e-book in Spanish titled Cómo criar un hacker: how to raise a hacker.

This book is aimed at parents who are not IT professionals but want to understand the hacker ethos, and a key set of things they can do to build that ethos beyond just teaching kids to code.

Sure, in the book I say it’s OK if your child isn’t interested in coding, I mention 2600 and Club Mate, and I quote ESR profusely (largely for his authority on this topic, not because I validate or agree with every position of his). But I hope none of that is controversial, because it isn’t at the core of the book’s motivation.

Instead, I challenge the inconsistent and unjustified battle against screens, highlight the relevance of open source, and stake a claim for hackers on note-taking and humor as tools for building character. I do all of this under the assumption that digital natives (whatever that means) aren’t set for excellence in a generation where every world leader is also a digital native: only hackers are.

Since this is a short book on parenting, I obviously don’t go into building actual infosec skills at any significant depth. I don’t even discuss the tactics of screen management, which are covered eloquently and empathetically in books like Screenwise.

In fact, it was by reading reviews of similar work that I found a huge gap in Spanish-speaking markets on this topic and decided to start there (plus, self-publishing in English would probably have taken me longer, as I’m not a native English speaker).

I’m ultimately neither a family counselor nor an expert on parenting. I was just raised a hacker by folks who didn’t even know what that was. Feedback is always welcome. (Cómo criar un hacker is available on Amazon and Smashwords.)

Perspective at //build

Recently I had the opportunity to share the stage with some brilliant internal and external colleagues advancing open source in the cloud at //build, Microsoft’s developer conference in San Francisco. Beyond being able to talk to about 400 attendees about how we’re approaching open source in the cloud, how customers are building open source applications in Azure and much more, speaking at //build had a very special meaning for me.

Before joining Microsoft, I didn’t have a lot of exposure to the Microsoft developer ecosystem, the Microsoft subsidiaries themselves or the employees working there. I was focused on a number of open source projects, such as Canaima (Venezuela’s national distro), on a number of communities, and on expanding a small open source system integrator in the region.

Most of my interaction with Microsoft was limited to public debates, at industry events or in Congress, or to the ISO/IEC 29500 discussion back in the day (both of which I’ve covered on this blog, in Spanish). However, around 2009 or so, the company I was CTO for and Microsoft decided to create an Open Source Interoperability Lab in Venezuela. The idea was to document common hybrid technology use cases (such as Samba-based DCs in Windows environments, or PHP and ASP.NET communicating via an ESB) and transfer that knowledge to customers.

As a result of that effort, I ended up being invited to and participating in PDC09 in Los Angeles. PDC was the precursor of //build, a yearly conference aimed at Microsoft-centric developers. There are three things I remember clearly from PDC09: the first was the “convertible tablet PC” they offered attendees (running Windows 7 bits that rapidly became Debian bits); the second was the PHP SDK for Azure and the preview access to that new “cloud” thingy; and the third was an open source roundtable led by Miguel de Icaza that mainly talked about governance and CodePlex.

While I didn’t know it back then, a lot of the things discussed in that roundtable influenced my decision, about a year later, to join Microsoft and work in open source strategy; a journey that brought me to Azure in less than 5 years. But I digress, and that whole story deserves another post.

Maybe some of the attendees then foresaw that Microsoft would end up acquiring Xamarin, or that attention would shift to non-CodePlex initiatives like GitHub. What I really didn’t expect was that all of that new reality would converge into a PDC-like event less than 10 years later. This year at //build it did, and then some.

For me, speaking at //build was a humbling opportunity to reconcile the many worlds increasingly pulled together by the force of open source. From the announcements to the content and all other metasignals at the conference, it was incredibly exciting to see this transformation manifesting itself within Microsoft’s developer community.

It highlights the importance of leaving no one behind when we explore new paradigms and technologies in the cloud, and how every individual in the open source community can exert change in this industry.

Why we go to LinuxFest Northwest

For the second year in a row since I moved to Redmond, I’ll be joining the Microsoft crew sponsoring and attending LinuxFest Northwest in Bellingham, Washington. This is one of the largest, if not the largest, Linux & open source events in the region, and it draws large crowds of smart geeks from Canada, the United States and other countries, as well as corporate sponsors like us.

One of the questions I get the most is: why does Microsoft sponsor and participate in this event? Microsoft has been sponsoring and participating in many open source conferences, projects and events in many parts of the world, but some people wonder why a non-corporate, pure Linux event, and others are naturally skeptical about it.

I don’t think there’s a single reason why we rally to convince our bosses to do it, but when it comes to open source we have been trying to do more closer to home. There is a vibrant Linux and open source ecosystem in Redmond, the Puget Sound area and the Pacific Northwest, and while we have been very active in Europe and in the Bay Area, we haven’t done a good job of connecting with the people closer to home.

For example, I recently had the fantastic opportunity to help the Pacific Northwest Seismic Network at the University of Washington run their Ubuntu-based Node.js applications for their “Quake Shake”. I think being able to help with that project, or with any other project or conference in any other part of the globe, is a good thing – but there’s no distance excuse for Bellingham!

Another great reason is the LFNW community itself. We love the crowd, the lively discussions, the sharing and learning spirit. And as long as we are welcomed by the community, we’ll continue to seek opportunities to connect with it. Plus, this is a really cool conference. This year, I’m cutting my vacation short to attend the event. A coworker is skipping church duty to help. We have heard from many engineers and program managers that they will be attending and want to carpool and staff the booth. And my friend has been investing all this time in logistics, ensuring we have a meaningful presence.

The community invites some of the sponsors to bring unique content that is relevant to the participants. Last year I had the opportunity to demo a Raspberry Pi device connected to Office via Azure. Most people in the room didn’t know Office runs in a browser, or that Azure could run Linux. But they listened and they thought it was cool. Some of them are now partners, helping customers do more with open source in Azure.

This year, I want to bring more Debian to this event, because I have been working a lot inside Microsoft to get more people up to speed with Debian-based development and we have serious community momentum around Debian in Azure. In true Microsoft tradition, we will have a cake to celebrate the arrival of Debian 8. I’ll have in mind all of those friends in the Debian community with whom I’ve been working for years to make sure we don’t drop the ball when it comes to responding to what our customers, partners and the community want when it comes to Debian.

And, hopefully, next year we’ll be back again in Bellingham for LinuxFest Northwest 2016!

Thoughts on growth and open source services

For many years I was infatuated with the idea of creating value out of open source professional services. To a certain extent, this is a function of when, where and how I was exposed to open source. Even today, after acknowledging the challenges of this model (the hard way), I find myself spending time modelling what needs to change in order to innovate on it.

While today there are statistically no skeptics of the tremendous impact that open source software has had in and beyond the IT industry, the prevailing thinking is that the open source opportunity doesn’t lie in professional services.

It’s commonly accepted that only a handful of players have found success in this model. In fact, some would argue that there can only be one, and that it exhausts the opportunity for everybody else. Media commentators shun rising startups whose business model smells too much of support and services.

As Ben Werdmüller recently wrote (motivating me to write this article), those services are neither recurring nor scalable. And there’s also proof in the market that well-designed, talented and recognized organizations eventually fail in their efforts to seize the open source consulting business.

Back in 2008, after 5 years selling open source services either as a freelancer or in small firms, I was invited to lead technical strategy for an open source focused system integrator in Venezuela. The organization had recently scored a support agreement with a large multinational hardware vendor for a subset of their customers’ Linux needs, and they were looking for a portfolio and an attractive environment for talent and for growth.

I spent the next 3 years building a team of 50+ across several countries in Latin America, shipping open source products and solutions and managing large consulting projects for customers in the public and private sectors. That support agreement became 3 partnership agreements with large IT multinationals. Yet with all that impact, the challenges of dealing with the subtleties and complexities of the open source professional services business remained unaddressed.

I took numerous lessons from that experience, ranging from managing a team of talented professionals who went on to highly successful roles in Europe and the Americas, to the art of marketing something as bland and commoditized as open source consulting.

Among the fun lessons: with a highly mobile talent pool in multiple countries, we managed our daily operations via IRC. We also built a lean-and-mean sales process led by the delivery teams, not sales; embraced document and knowledge management; and invested in the communities and ecosystem that help open source be successful.

But I digress. Portfolio-wise, we had organized our offering into three core areas (infrastructure, applications and databases) and a number of incubation areas that gave us a unique competitive advantage, such as knowledge management and end-user experience (we focused a lot on Linux on the desktop), or business intelligence and unified communications. All with open source, all with Linux.

Yet market disruptions, such as government policy in an economy where the public sector concentrates an overwhelming amount of spending power, contributed to masking the unaddressed. Since 2004 there had been a stated pro-open source policy in the public sector, which evolved into a number of unstated policies trickling down to the public and private sectors alike.

When this policy was introduced, there was only a small talent pool to cover the complex needs of a public sector that sprawled beyond the government vertical, with plenty of Oil & Gas, Financial Services, Manufacturing and other needs. Furthermore, virtually no relevant foreign organization took advantage of this opportunity due to general market conditions, a difference from how similar policies were rolled out in, for example, Ecuador (where the US dollar is the local currency).

Therefore, the supply-and-demand reality made margin management, a critical discipline in the services business, an afterthought. Plus, the depth and quality of our technical results were a catalyst for business opportunities, so marketing wasn’t really in the picture. We were a go-to open source consulting company, and we got away with selling bland OpenLDAP clusters and Asterisk IPBXs as if they were actual products, repeatable and scalable.

And in exploring other models we found support was something we actually enjoyed: we were really proactive and fanatical about it and, generally speaking, never had to sell a support agreement. On the training side of things we managed to set consistency standards across courses and deployments, but it all accrued to that non-recurring base of services, to that dreaded hourly rate. So neither was ever a differentiated source of growth, as everything always converged into a consulting project.

At some stage we did invest in a products team that explored all the right things, which years later hit the market (agile embedded with a general-purpose Linux OS, SaaS and cloud-powered IPBXs, analytics and insights, etc.), but the reality is that our operation was built on a professional services foundation, which made it unrealistic to detach. We tried using a different brand for our product labs, but the talent we had attracted and developed thrived in services.

I still see the boundaries between a VAR, an ISV and an SI as pretty artificial in the open source world, just as I find it less relevant to look at the boundaries between developers and IT professionals with an open source hat on. Of course the business models are different: some are based on volume and depend on marketing and channel, while others are based on margin and depend on trust and references. This mix is no different from what we’re seeing today in open source startup IPOs.

Today I don’t struggle to articulate a value proposition or find demand for the open source capabilities I’m selling. I’m struggling to find the right partner to help me scale. And I refuse to believe I can only go to a global SI or a well-known Bay Area ISV for those needs, when I have lots of VARs, SIs and ultimately great people in local markets who can land meaningful solutions. Yet I’m wary about putting all the eggs in the basket of building value out of open source professional services.

We’re now living in interesting times, where the successful players in this space are crowdsourcing services growth via the channel. This is a fascinating move from an open source support and services behemoth, and it has a lot of potential if it can connect local talent with the consistency that accrues to growth.

In the meantime, common sense still indicates that entering the market to sell non-repeatable open source professional services can be highly rewarding in developing people, acquiring and developing know-how, and making an impact. It can even help reduce the consumption gap for a complex product and help build market share. It just doesn’t seem to be a high-growth strategy for most people out there.

Rebasing CoreOS for ephemeral cloud storage

The convenience and economy of cloud storage are indisputable, but cloud storage also presents an I/O performance challenge. For example, applications that rely too heavily on filesystem semantics and/or shared storage generally need to be rearchitected, or at least have their performance reassessed, when deployed on public cloud platforms.

Some of the most resilient cloud-based architectures out there minimize disk persistence across most of the solution components and try either to consume tightly engineered managed services (for databases, for example) or to persist data in a very specific part of the application. This reality is even more evident in container-based architectures, despite the many methods for cooperating with the host operating system to provide cross-host storage functionality (i.e., volumes).

Like other public cloud vendors, Azure presents an ephemeral disk to all virtual machines. This device is generally /dev/sdb1 on Linux systems, and is mounted either by the Azure Linux agent or by cloud-init at /mnt or /mnt/resource. It is an SSD device local to the rack where the VM is running, so it is very convenient for any application that requires non-permanent persistence with higher IOPS. Users of MySQL, PostgreSQL and other servers regularly use this method for, say, batch jobs.
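A quick way to confirm this layout on a given VM, before repurposing the disk, is to look at the block device and its mountpoint (a minimal sketch; lsblk and findmnt are standard util-linux tools, and the device and mountpoint names are just the defaults described above):

# Confirm the ephemeral device and where it is currently mounted
# (assumes the defaults above: /dev/sdb1 at /mnt or /mnt/resource)
lsblk /dev/sdb
findmnt /mnt/resource || findmnt /mnt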

Today, you can roll out Docker containers in Azure via Ubuntu VMs (the azure-cli and walinuxagent components will set it up for you) or via CoreOS. But a seasoned Ubuntu sysadmin will find that simply moving or symlinking /var/lib/docker to /mnt/resource in a CoreOS instance and restarting Docker won’t cut it if you want containers running on the higher-IOPS disk. This article is designed to help you do that by explaining a few key concepts that are different in CoreOS.

First of all, in CoreOS stable Docker runs containers on btrfs. /dev/sdb1 is normally formatted with ext4, so you’ll need to unmount it (sudo umount /mnt/resource) and reformat it with btrfs (sudo mkfs.btrfs /dev/sdb1). You could also change Docker’s behaviour so it uses ext4, but it requires more systemd intervention.
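Putting those two commands together, the reformatting step looks roughly like this (a minimal sketch, assuming the ephemeral disk is /dev/sdb1 and currently mounted at /mnt/resource):

# Unmount the ephemeral disk and reformat it with btrfs
sudo umount /mnt/resource
sudo mkfs.btrfs /dev/sdb1   # add -f if mkfs.btrfs refuses to overwrite the existing ext4 filesystem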

Once this disk is formatted with btrfs, you need to tell CoreOS it should use it as /var/lib/docker. You accomplish this by creating a unit that runs before docker.service. This unit can be passed as custom data to the azure-cli agent or, if you have SSH access to your CoreOS instance, by dropping /etc/systemd/system/var-lib-docker.mount (file name needs to match the mountpoint) with the following:

[Unit]
Description=Mount ephemeral to /var/lib/docker
Before=docker.service
[Mount]
What=/dev/sdb1
Where=/var/lib/docker
Type=btrfs
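If you go the custom data route instead of SSH, the same unit can be wrapped in a CoreOS cloud-config file and supplied when the VM is created. A hedged sketch follows (the cloud-config.yaml file name is just an example, and the ephemeral disk must already carry a btrfs filesystem or the mount will fail):

# Hypothetical sketch: wrap the mount unit in CoreOS cloud-config custom data
cat > cloud-config.yaml <<'EOF'
#cloud-config
coreos:
  units:
    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Before=docker.service
        [Mount]
        What=/dev/sdb1
        Where=/var/lib/docker
        Type=btrfs
EOF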

After systemd reloads the unit (for example, by issuing a sudo systemctl daemon-reload), the next time you start Docker this unit should be called and /dev/sdb1 should be mounted at /var/lib/docker. Try it with sudo systemctl start docker. You can also start var-lib-docker.mount independently. Remember, there’s no service command in CoreOS, and /etc is largely irrelevant thanks to systemd. If you wanted to use ext4, you’d also have to replace the Docker service unit with your own.
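The whole reload-and-verify sequence, as a minimal sketch (findmnt and docker info are just convenient ways to confirm the result, not required steps):

# Reload units, bring up the mount, then start Docker
sudo systemctl daemon-reload
sudo systemctl start var-lib-docker.mount
sudo systemctl start docker

# Optional checks: the mountpoint and Docker's storage backend
findmnt /var/lib/docker                        # should show /dev/sdb1, type btrfs
sudo docker info | grep -i 'storage driver'    # should report btrfs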

This is a simple way to rebase your entire CoreOS Docker service onto an ephemeral mount without using volumes or changing how prebaked containers write to disk (CoreOS describes something similar for EBS). Just extrapolate this to, say, a striped LVM, RAID 0 or RAID 10 setup for higher IOPS and persistence across reboots. And, while not meant as a benchmark, here’s the difference between the out-of-the-box /var/lib/docker and the ephemeral-based one:

# In OS disk

--- . ( ) ioping statistics ---
20 requests completed in 19.4 s, 88 iops, 353.0 KiB/s
min/avg/max/mdev = 550 us / 11.3 ms / 36.4 ms / 8.8 ms

# In ephemeral disk

--- . ( ) ioping statistics ---
15 requests completed in 14.5 s, 1.6 k iops, 6.4 MiB/s
min/avg/max/mdev = 532 us / 614 us / 682 us / 38 us
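If you want to reproduce a similar comparison yourself, something along these lines should work (a rough sketch; ioping is not part of CoreOS, so you may need to run it from a container or toolbox, and the request counts here are arbitrary):

# Hypothetical reproduction of the comparison above
cd / && sudo ioping -c 20 .                  # OS disk
cd /var/lib/docker && sudo ioping -c 20 .    # ephemeral (btrfs) disk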