Øredev 2013 – Day 1 (the first half)


This is my third year in a row attending the Øredev conference. The last two years were totally fantastic, so I was really looking forward to this year’s edition. This year I travelled with a large group from Tradera (my new job), and switching jobs made some sessions more interesting and others less so. For example, mobile stuff is much more important to me now.

I’m not feeling as ambitious as in previous years with my blogging, so check out my colleague Daniel Saidi’s blog for more detailed reviews of the sessions. I will mostly focus on my favourite sessions and try to examine them in more detail rather than cover all the sessions I attended. With that said, let’s go!

Scaling Mobile Development at Spotify – Per Eckerdal and Mattias Björnheden

This session was interesting for me because it ties in so closely with the work I am doing at Tradera right now. Per and Mattias gave us a before-and-after story of how they changed the technical and organizational structure of their mobile app development. The story starts with two mobile teams: 30 developers in the iOS team and 20+ developers in the Android team.

This didn’t work too well for them, so they changed their organizational structure to cross-functional teams that each own a feature in the Spotify app, e.g. My Music or social. These teams are responsible for the development of that feature on the backend, in the mobile and desktop apps, and in the web player. The features are loosely coupled and for the most part their only connection to another feature is to call it to open it. Spotify also changed their branching strategy to have one main branch and use toggles/feature flags to control when new features are released. This allows them to have shorter release cycles (every 3 weeks).
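
As a side note, the mechanics of a feature flag are simple. Here is a minimal sketch in C# (my own hypothetical code, not Spotify’s) of the kind of check that lets unfinished features live on the main branch until they are switched on:

using System;
using System.Collections.Generic;

// Hypothetical feature flag check - not Spotify's implementation.
// Unfinished features are merged to the main branch but stay switched off
// until the flag is flipped, which is what makes trunk-based releases workable.
static class FeatureFlags
{
    static readonly HashSet<string> enabledFeatures =
        new HashSet<string> { "NewMyMusic" };

    public static bool IsEnabled(string feature)
    {
        return enabledFeatures.Contains(feature);
    }
}

class Program
{
    static void Main()
    {
        if (FeatureFlags.IsEnabled("NewMyMusic"))
            Console.WriteLine("Show the new My Music feature");
        else
            Console.WriteLine("Show the old My Music feature");
    }
}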

This organizational change required changing their architecture as well. They had a core library which a core team worked on, and then specific APIs for each platform. So, for example, the Android team had an API called Orbit that they used to connect to core, the desktop team had an API called Stitch and the iOS team had an API called Core ObjC (I think). This architecture was a bottleneck for them, as changes in core (or one of the APIs) could only be done by the team that knew that part of the system. This meant waiting for another team to do the change, and as Per said: waiting sucks.

The solution was to cut away layers. The clients now all call a common core API that is just a thin layer over the feature subsystems. The client code has moved down a layer, either into this thin core API layer or into the feature subsystem itself. The clients make just one call to fetch data (view aggregation). The idea is that a feature should work pretty much the same in the different clients and that the clients should do as little as possible in the platform-specific code.

This idea of view aggregation is very interesting. We face a similar situation with our web and mobile clients at Tradera. The client makes a call, gets the whole view model (view data) in one clump and then shows that to the user. This means there will be duplication at the API level, and Per explained that this is a deliberate design choice. I am working on a web app, and if we applied this in our case it would mean making our ASP.NET MVC controllers dumb (we would be using them just to render HTML) or maybe eliminating them altogether.
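
To make that concrete, here is a hypothetical sketch (not our actual Tradera code) of what a dumb ASP.NET MVC controller looks like when all the aggregation happens behind a single call:

using System.Web.Mvc;

// Hypothetical view model and controller - illustrative only.
// The whole view model is fetched in one call (view aggregation) and the
// controller does nothing except hand it to the view.
public class ListingViewModel
{
    public string Title { get; set; }
    public string SellerName { get; set; }
    public decimal CurrentBid { get; set; }
}

public interface IListingViewService
{
    ListingViewModel GetListingView(int id);
}

public class ListingController : Controller
{
    private readonly IListingViewService listingViewService;

    public ListingController(IListingViewService listingViewService)
    {
        this.listingViewService = listingViewService;
    }

    public ActionResult Details(int id)
    {
        // One call returns everything the view needs
        var viewModel = listingViewService.GetListingView(id);
        return View(viewModel);
    }
}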

Per and Mattias explained more about how they use feature toggles/flags and then showed off the QA tools built into the Spotify apps. They recommended that everyone build these types of QA tools and I, for one, will be ripping off some of their ideas in the near future. Especially the login chooser they showed for switching between different test users, so that you don’t have to remember test logins or passwords.

Nice session. It was great to hear how another company solved the problem of multiple platforms by changing both their organization and their architecture. The video is already available on Øredev’s website.

Less is More! – Jon Gyllenswärd and Jimmy Nilsson

“The root cause of getting stuck in a rut is if we are carrying a large codebase” – Jimmy Nilsson

This session was about the story of how Sirius International Insurance rewrote a monolithic codebase into a set of subsystems. Jon is a former colleague so I couldn’t miss this one! Sirius is a reinsurance company, basically a company that insures other insurance companies so that they can spread the risk in the case of large catastrophes. The system in focus here simulates the outcome of these large catastrophes.

The Sirius story contains a similar theme to the one in the Spotify session. It is always interesting to see examples of architectures in real life and on a larger scale than the toy examples used in most tech sessions.

The first interesting idea from this session was building new service balloons (as they called them). Instead of building a new feature into the old monolithic system, they built it as an isolated service on the side. This worked well for them but it wasn’t enough.

Sirius met Jimmy after watching one of his sessions at Øredev in 2011 (pretty cool that the conference inspired them to make the change) and invited him to help them out. They got the business to buy into the idea of a rewrite and were given 9 months to rewrite the system and create a new UI as well.

The basic architectural style is that each subsystem owns its own data AND its own UI. This leads to some duplication of both data and concepts. One of the ways they got around this self-imposed design limitation is to have a shared-data subsystem. However, this subsystem also owns its UI, so if you want a dropdown list of regions in the UI then you fetch the whole component from the shared-data subsystem (both the UI and backend parts). Not sure I can apply this to my current project but it is thought-provoking nonetheless.

They also changed the organization of their teams during this rewrite (just like Spotify did) but in a totally different way. They went back to specialized teams, e.g. UI and database teams, rather than having cross-functional teams. Their experience is that developers enjoyed this more and that it was worth the overhead of the extra meetings. I think I’ll have to eat lunch with Jon soon and discuss this more, as it is a totally different approach from what we use at Tradera.

Jimmy presented some of the interesting solutions they came up with during the project, like saving serialized object graphs from their domain model to JSON files instead of using a database. All part of their keep-it-simple motto. The video can be found here.

As I want to go to bed now, I’ll write about the rest of day one tomorrow. It was great too (I saw Woody Zuill’s session on Mob Programming), so stay tuned for more.

Find the Hidden Pull Request Info on Github

I review quite a few pull requests on Github and have always thought it strange that Github does not show the repository url of the fork that the pull request came from. I’ve always had to navigate through a few pages to find it. I need this url to add the fork as a remote locally and pull in the changes so that I can review them.

But I had a vague memory of someone on Twitter mentioning that it is possible to find the url on the pull request page. I went through my Twitter favourites and couldn’t find anything. Then, a while ago, I (re)discovered this little Github secret.

I’m writing this down mostly as a reminder for myself. Here it is, click the i:

[Screenshot: the little info (i) icon on the pull request page]

When you click it, Github shows all the steps to do the merge manually and *ta-da* the fork’s url, as either an http url or a git url.

[Screenshot: Github’s manual merge instructions, including the fork’s url]
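
Once I have the url, my typical review flow looks roughly like this (the user and branch names are placeholders):

# Add the fork as a remote, fetch it and check out the branch under review
git remote add someuser https://github.com/someuser/FluentMigrator.git
git fetch someuser
git checkout -b review-their-branch someuser/their-branch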

Build Your Open Source .NET Project On Travis CI

Travis CI is a continuous integration service that lives in the cloud and is free for public Github repositories. When you push a change to your Github repo, Travis CI will automatically detect it and run a build script. It works for all branches and pull requests as well, and has some really nice integration with Github. In the screenshots of Github pull requests below you can see that the first pull request has a “Good to merge” label and the second one failed.

[Screenshot: a pull request marked as good to merge by Travis CI]

[Screenshot: a pull request with a failed Travis CI build]

Travis CI supports loads of languages, but not C#; the reason is that Travis CI only supports Linux servers (although Windows support seems to be on the way).

As a core committer for FluentMigrator, an OSS .NET project, this is actually just what I was looking for. We have a TeamCity server (thank you JetBrains and CodeBetter) but we don’t have any testing on Mono and Linux. I had seen that the git-tfs project (also .NET and C#) was using Travis CI and thought I’d try to copy their build script. But it was not as simple as that! Here is my guide to getting a .NET project to build on Travis CI.

Sign up for Travis CI

The first step is to sign up on the Travis CI website. You can only sign up for Travis CI via your Github login, which makes sense as the service focuses on CI for Github projects only. After signing in you should see your name in the top right corner; click on it to open your profile.

[Screenshot: enabling the repository hook on the Travis CI profile page]

Enable the Github hook for Travis CI by selecting the appropriate repository (daniellee/FluentMigrator in my case). And that is all you need to do. If the repository contains a file called .travis.yml then Travis CI will try to build it, triggered after every push to the repo.

XBuild

The second step is to create an MSBuild XML file that can be run with XBuild. XBuild is the Mono version of MSBuild and uses the same file format. The simplest build script specifies which .NET tools version to build the project with, the platform (x86 or AnyCPU) and the name of the solution file. Here is the MSBuild file for FluentMigrator:

<?xml version="1.0"?>
<Project ToolsVersion="4.0" DefaultTargets="CI" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <Target Name="CI" DependsOnTargets="Build" />

  <Target Name="Build">
    <MSBuild Projects="FluentMigrator (2010).sln" Properties="Configuration=Debug;Platform=x86" />
  </Target>

</Project>

Create A Travis YAML File

The next step is to create a .travis.yml file. A file with the extension .yml is a YAML file, a file format sort of like JSON but without all the curly braces. Check out the documentation for all the options that can be defined in the YAML file. Here is the YAML file for FluentMigrator:

language: c

install:
  - sudo apt-get install mono-devel mono-gmcs nunit-console

script:
  - xbuild CI.proj
  - nunit-console ./src/FluentMigrator.Tests/bin/Debug/FluentMigrator.Tests.dll -exclude Integration,NotWorkingOnMono

This needs a bit of explanation. Travis CI runs on Ubuntu Linux, and Mono is not installed on the build machine by default, so we have to install everything before trying to build and test the project.

As C# and .NET are not supported by Travis we set the language to C.

The install part of the script uses Ubuntu’s package manager, apt-get (Chocolatey is the Windows equivalent), to fetch Mono, the Mono C# compiler (the mono-gmcs package) and the NUnit console runner. We need the compiler to be able to compile our C# code. We need the Mono development tools (mono-devel) as they contain the Mono runtime with .NET 4.0 support, a CLR implementation that works on Linux. And the NUnit console runner is needed to run tests from the command line.

The first task in the script step is to run the MSBuild file with xbuild. For FluentMigrator this builds the main solution file, which contains all the projects.

The second task is to run all the NUnit tests from one of the test projects. I did try to run NUnit via the MSBuild script but gave up, as it was much easier to do it this way. There is one very important gotcha to note here. All the examples of using the NUnit console runner use a forward slash (/) for the switches, even the man pages (Linux help pages) for nunit-console. It took me a while, but I eventually noticed that the NUnit documentation mentions that on Linux you should use a hyphen (-) instead, followed by either a space or a colon after the switch. E.g. I use the exclude switch to exclude some categories of tests like this: -exclude Integration,NotWorkingOnMono
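
To make the difference obvious, here are the two styles side by side (the dll name is just an example):

# Windows style: forward-slash switches
nunit-console MyProject.Tests.dll /exclude:Integration,NotWorkingOnMono

# Linux style: hyphen, then a space or a colon
nunit-console MyProject.Tests.dll -exclude Integration,NotWorkingOnMono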

Deploy It!

Now it is time to cross your fingers and do a git push to your repo on Github. I’m sorry, but there is no way it is going to work the first time unless you have already spent some time building the project on Linux. It took me seven attempts to get it to build successfully and another five before I got all the tests to pass.

Potential Pitfalls

Here is a list of all the problems I ran into, though I’m sure there are loads of other problems that could occur in your project.

Linux is case sensitive

There is a class named SqliteQuoter in FluentMigrator (with exactly that casing) and I got the following slightly baffling error on my first run:

error CS2001: Source file `Generators/SQLite/SqliteQuoter.cs' could not be found

When I looked at the actual filename (it looked perfectly correct in Visual Studio) I saw that it had this casing: SQLiteQuoter. As just changing the casing of the filename is not picked up as a change by Git, you have to explicitly rename the file:

git mv -f SQLiteQuoter.cs SqliteQuoter.cs

The next casing problem was just as baffling. It turned out that in the csproj file a reference to System.Data.SQLite.dll looked like this: System.Data.SQLite.DLL. That meant that on Linux the filename could not be matched. I fixed it by editing the csproj file by hand and changing the case.
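
The offending line looked something like this (the hint path here is illustrative, not the exact one from the project):

<!-- Before: the extension's casing does not match the file on disk,
     so the assembly cannot be found when building on Linux -->
<Reference Include="System.Data.SQLite">
  <HintPath>..\lib\System.Data.SQLite.DLL</HintPath>
</Reference>

<!-- After: the casing matches the actual filename -->
<Reference Include="System.Data.SQLite">
  <HintPath>..\lib\System.Data.SQLite.dll</HintPath>
</Reference>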

I got stuck on another issue for a while, but it turned out to have nothing to do with Travis CI or Linux. After that was fixed I got a green build, but the test runner did not run. The problem was the one I mentioned above: the NUnit console runner needs a hyphen instead of a forward slash.

NewLine

Travis CI now ran all of FluentMigrator’s tests (except for the integration tests) and had 21 failing tests out of 1254. Most of the failures were due to the newline problem. FluentMigrator is (was) littered with \r\n, which is the Windows newline sequence (Carriage Return and Line Feed). On Linux a newline is \n (Line Feed) only. The rather tedious solution was to go through the code and replace all instances of \r\n with Environment.NewLine.
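
The change itself is trivial; here is an illustrative before-and-after snippet (not actual FluentMigrator code):

using System;

class NewLineExample
{
    static void Main()
    {
        // Before: a hard-coded Windows line ending, which breaks string
        // comparisons in the tests when they run on Linux
        var before = "ALTER TABLE Users ADD Email NVARCHAR(255);" + "\r\n";

        // After: Environment.NewLine is "\r\n" on Windows and "\n" on Linux,
        // so the expected and actual strings match on both platforms
        var after = "ALTER TABLE Users ADD Email NVARCHAR(255);" + Environment.NewLine;

        Console.WriteLine(before.Length != after.Length
            ? "Running on a platform with single-character newlines"
            : "Running on Windows");
    }
}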

File Paths

A much harder issue to solve is file paths, which are fundamentally different on Windows and Linux. For some tests I could use Path.Combine and Path.DirectorySeparatorChar to get them to pass, but others had to be ignored (for now).
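
Path.Combine and Path.DirectorySeparatorChar let the framework pick the right separator for you; here is a small illustrative example (the paths are made up):

using System;
using System.IO;

class PathExample
{
    static void Main()
    {
        // A hard-coded Windows path like "src\\FluentMigrator.Tests\\sql\\create.sql"
        // does not resolve on Linux, where the separator is '/'
        var path = Path.Combine("src", "FluentMigrator.Tests", "sql", "create.sql");

        // Prints src\FluentMigrator.Tests\sql\create.sql on Windows
        // and src/FluentMigrator.Tests/sql/create.sql on Linux
        Console.WriteLine(path);
        Console.WriteLine("Separator on this platform: " + Path.DirectorySeparatorChar);
    }
}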

And at attempt number twelve I had FluentMigrator building and all the tests running successfully on Mono and Ubuntu Linux. There are not a lot of open source .NET projects that can say that.

Next Step

Travis CI has support for MySQL, Postgres and SQLite, and it would be fantastic if I could get all the integration tests running against them. Next week maybe.

Hope this helps out some of the other OSS .NET projects.

Git merge commit with no fast forward

When working in open source, it is quite common that I want to either manually merge in a pull request or refer to a Github issue when merging in one of my own branches. If you do a merge and the target branch has not diverged, then Git will not create a merge commit; it does a fast-forward merge instead, as this is the desired behaviour most of the time.

To create a merge commit when merging in a branch, I use this command:

git merge branchname --no-ff -m "Merge pull request #999 from fork/branch"

The --no-ff switch means no fast-forward while merging. There is also some Github functionality in the commit message: using # followed by the issue or pull request number will create an automatic link to this commit in that issue or pull request.

[Screenshot: commit reference as seen in a Github pull request]

Øredev 2012 – Thursday Afternoon

[Photo: Mads Torgersen]

I had a fantastic morning and the afternoon was just as good. Here we go.

Katrina Owen – Therapeutic Refactoring

I had to leave a bag at the hotel and ended up having to jog a bit to make this session after lunch, getting there just before they closed the door. Lucky for me! The topic of refactoring legacy code is one that lies close to my heart and I totally fell in love with this presentation. I did a presentation of my own at TechDays earlier this year, so I know how hard it is to find a good (I mean bad) code sample and then refactor it into readable, maintainable code without losing your audience.

Katrina takes a gnarly method and refactors it into beautiful code in about 30 minutes. The code sample is Ruby code for creating a filename, but the techniques she uses are just as applicable to any statically typed language. Katrina put the code up on Github along with all the steps. Here is the original file and here is what it looks like in the end, with tests of course.

What makes this session amazing is just how many techniques she crams into those 30 minutes while still doing it in a very relaxed way. She almost makes it look too easy. If you can learn to refactor like this then you’ll be able to take on any legacy codebase. Lovely art on the slides as well. So what are you waiting for? Start watching here.

Sandi Metz – Less, The Path To Better Design

Sandi is another Ruby programmer and someone who knows OO really well. She is the author of Practical Object-Oriented Design in Ruby, which I haven’t read (yet). Sandi starts by defining design as the art of arranging code, and notes that when writing code there is always tension between the code that has to work today and the changes that will come in the future. So the purpose of design is to reduce the cost of change. She goes through some possible ways to guide your code design, such as patterns and principles like SOLID or the Law of Demeter, but concludes that while these are great, they are too broad to be of practical help at the code level.

So Sandi came up with her own way of diagnosing code which she calls TRUE:

  • Transparent (easy to see consequences of changing it)
  • Reasonable (cost should be proportional to the size of the change)
  • Usable (should be able to reuse it)
  • Exemplary (leads others to change it in the right way)

She then proceeds to go through some code samples and shows different ways to design the code, all the time applying the TRUE diagnostics to check how good the solution is. Sandi gets to the core of OO programming as opposed to procedural programming. I think I know a lot of this, but mostly implicitly (I would have a hard time explaining it clearly). For example, that the direction of a dependency is a choice and you should choose the direction that will cause the fewest changes in the future. But since you can’t know this at the start, decouple. I really liked how Sandi could analyse code and judge whether it was good enough or whether she needed to refactor more. That’s a skill I’d like to be better at.

Here is just some of the advice that I picked up from this session.

  • Classes are more stable when they know less.
  • You cannot avoid dependencies. Depend on things that are more stable than you.
  • You cannot know what is unstable, guard against it, don’t guess. Decouple.
  • Abstractions are more stable.

It should be obvious by now that I really liked this session and that another book just got added to my reading list. You can watch the session here.

Mads Torgersen – TypeScript: JavaScript At Scale

Mads started his session with an extremely controversial statement.

“Application scale JavaScript development is hard”

I noticed on Twitter that this statement got absolutely hammered by a bunch of JS developers at the CascadiaJS conference. Personally, I’ve never built a large application in JavaScript so I can’t really judge the validity of it, but the concept of converting a dynamic language into a static one seems strange.

However, Mads gave a really solid introduction to TypeScript and showed the advantages of being able to add extra tooling to aid in writing JavaScript. I’m still not totally sold, but I have to admit it would be nice to be able to rename a class across several files.

I’d already taken a peek at TypeScript before, but I learnt a lot at this session. For starters, TypeScript is not provably type safe. The reason for making JS statically typed is the tooling. All errors are actually warnings generated in Visual Studio; you can ignore them and the code will run anyway. For example, if I define a variable as a string but set it to a number, then Visual Studio will flag it as an error but won’t stop me from running the code.

To get this tooling for existing JS libraries, a TS definition file has to be created that declares the mapping between JS and TS. These will have to be hand-rolled as far as I can see. A quick Google search produced these examples – AngularJS and AngularJS again. I see this as a potential problem: which version is the best one? Will these be kept up to date? Or will Microsoft take over this job?

TypeScript works with Node. There is already a node.d.ts definition file, and TypeScript has support for AMD for including other files. TypeScript is open source and on CodePlex, so I will be having a look at the source code. The fact that the typing is optional, and that I can have mostly JavaScript and then just sprinkle in some TypeScript where I feel I need it, was definitely a plus for me. And even if I did have some quibbles about the generated code (the whole classic class-with-constructor thing), for the most part it looked very clean. I don’t think it would be too hard to debug either.

If you want to know more about TypeScript then check out the Playground which is a nice REPL that shows the automatic conversion to JavaScript. And the session is already up if you want to watch it.

Alexander Bard – The Rebels Come Out Online – What if the Internet is something much bigger than we think?

I’m not going to write too much about this and I didn’t take any notes. I just allowed myself to sit back and be entertained. This keynote generated the most discussion afterwards and a ton of memorable quotes. Highly recommended, and it would be equally entertaining for non-programmers. It made me think about Twitter and my smartphone in a whole new way.

Summary of Thursday

Daniel and I got invited to Magnus’ annual meatball dinner, along with people he has got to know over the years at Øredev. Gorgeous meatballs and great fun to be able to meet some of the speakers in such a relaxed setting. After that we went to the evening keynote with Alexander Bard and had some cracking discussions with friends. A day to remember.

Øredev 2012 – Thursday morning

Today was simply fantastic. Every session I went to was really good or brilliant and the keynotes were really good too. I really want to say thank you to my employer Active Solution and my manager Magnus for sending me to Øredev. It is very much appreciated!

Reginald Braithwaite – The Rebellion Imperative

Compared to the keynote with Jim McCarthy on Wednesday, Reginald Braithwaite was much more low-key (in a very charming way). His first sentence was an apology for not being a professional speaker, as he is just a programmer. But he displayed a lot of natural talent and slipped in loads of subtle jokes.

“I’m lisping slightly. Just imagine that there are () around everything I’m saying, it’ll work out”
– Reginald Braithwaite

Reg introduced himself as the child of two socialists and told us that he loves Cuba. This led into his dark vision of the world we live in, where corporations don’t care about progress. Wealth breeds inefficiency amongst people and corporations, as they have no incentive to change. In fact it’s just the opposite: they build moats to protect their interests, e.g. patents.

“All those moments will be lost in time, like tears in rain”
– Roy Batty, Blade Runner

is the quote Reg used to illustrate the fate of many of the great innovators. He showed us a picture of the inventor of VisiCalc, Dan Bricklin, and asked how many of us recognized him. No hands went up. Same for Jef Raskin, the creator of the Mac computer.

“Your ideas will go further if you don’t demand that you go along with them”
– Reginald Braithwaite

Using ideas from the book Marketing Warfare, Reg presented the four sustainable positions for a company:
  • The leader
  • The rival
  • The innovator/disrupter
  • The 99%

The rebels are the 99%. All these startups are trying to be the disrupter but that’s really hard to do and only a handful will succeed. So if you want to be a rebel and build a successful business then go and watch Reg’s keynote!
He finishes the keynote with a dance to jitterbug music, so all in all it is well worth your time.

Fred George – Micro-Service Architecture

This was the first session at Øredev this year where I felt really challenged by a new idea. I’d heard a little bit about micro services last year via Dan North but I haven’t read or heard much about them since. Fred George is a very experienced programmer (IBM and ThoughtWorks) and he used the timeline of his career to show how he evolved from using a layered architecture to micro services. The story starts with a 1 million line J2EE system with a layered architecture, now in a pitiful state. Only 70% of the acceptance tests passed, and not the same 70% after every run. They measured the number of unit tests written every week and noticed that only a few programmers were writing all the tests. As an aside, weekly unit test count is a really interesting way to measure progress. The maintenance of the project had been outsourced and it was wallowing in technical debt. So how did it end up this way? Fred’s theory is that there are four reasons for the existence of technical debt:

  • Laziness
  • Sloppiness
  • Inexperience
  • No power to refuse

Fred then continued along the timeline of his career through a series of shorter projects and prototypes where he started to move towards a pub/sub model of architecture. This led to Fred and his colleague Jeff Bay coming up with the Baysian Service Principles (named after Jeff Bay).

  • It’s okay to run more than one version of a service at the same time
  • You can only deploy one service at a time

These rules started to change how the team worked. They started deploying 3 times a day.

The next evolution and step in the timeline was a project where they tried out a technique they called the Pinball Method. The project was to build a system to do batch processing of replacement parts for cars. Processing started with an empty order, i.e. the pinball. The order then bounced around the system, calling lots of tiny services to fill up the order object with all the information it needed. These services were around 100 lines of code each and did one thing. They did have some problems with this: it was hard to figure out where an order was, and the system was hard to understand, especially for inexperienced programmers.

They iterated on this architecture more successfully after that, especially at the Forward Internet Group. This resulted in services that were small and disposable. If a change needed to be made to a service then they rewrote it instead of modifying it. The services became self-monitoring and this replaced unit testing. Real-time business monitoring replaced acceptance testing. They used JSON as the message standard for communication between services, which meant that the services became language agnostic (Ruby, C++, Clojure, Node). He also mentioned that they used LinkedIn’s pub/sub system Kafka.

All this resulted in them killing off a lot of their agile practices and their technical debt pretty much disappearing. As the system consists of hundreds of small services instead of the usual layers, it is not monolithic, but it is still complex due to having to manage the flow of all of these services. Fred mentioned that he was surprised by how large the impact of this technical change was. It changed the dynamic of the team and the company.

This session really got me thinking. It feels like there are both definite advantages and disadvantages to the micro service approach. Some businesses cannot afford to test in production in this way, and the learning curve must be quite steep and require very competent programmers. Companies like Github do something similar to this on their frontend (though not on their backend). I’m also wondering how they solve all the potential performance problems. But I’m intrigued and I highly recommend this session. It should be up on Vimeo soon.

Denise R. Jacobs – Scalable and Modular CSS FTW!

Ivar and Daniel at work have both been talking about SMACSS as a better way to structure CSS files. I work on projects that use Twitter Bootstrap or similar grid frameworks, and on legacy systems with loads of horrible CSS, but I don’t feel that I really have control over the CSS in the same way as I do over the rest of the code. Denise gave a great introduction into not just SMACSS but also OOCSS, DRY CSS and CSS for Grown Ups. These are all style guides or sets of rules that you can apply to your CSS architecture. Denise did a great job of turning her presentation into a fairy tale, with herself as a pirate helping to restructure the CSS of a castle so that its foundations would be as beautiful as the outside. It might sound a bit strange but she pulled it off.

Denise goes through the different style guides and describes both the differences and the similarities between them. This session is full of good tips for improving your CSS: writing better selectors, how to group them, naming conventions, layout helpers, leveraging the CSS cascade and how to modularize your CSS. These are tips that you can start applying tomorrow. And best of all, I won the SMACSS book by Jonathan Snook for being brave (or stupid) enough to answer a question from Denise at the end. I think my CSS skills are about to get dramatically better!

Morning Summary

I am going to have to split this up into two parts; I just couldn’t stop myself writing about these sessions as they were so good. And the afternoon sessions were brilliant too. So lots more to come in part 2 – the afternoon.

Øredev 2012 – Wednesday

I’m a bit late getting this out (compared to last year) but it’s all good. I am sitting in the Slagthuset building where Øredev is held, drinking a lovely cappuccino at the espresso bar. It was a bit of a slow start this year and, while most sessions were pretty good, it wasn’t the whirlwind start of 2011. They have a really cool meme theme here based on funny YouTube clips and I recommend watching some of them (Double Rainbows, the cat version of Grinding the Crack and Ken Lee). So here are the highlights of my first day at Øredev this year.

Iris Classon – Stupid questions and n00bs – top ten intriguing things you need to do

Iris’ session was very interesting, and totally amazing for someone who has been in the industry for less than 2 years. If I were able to travel in time and redo the start of my career, I’d definitely do it more like Iris has done. Anyway, Iris has done a series of stupid questions on her blog and some of them are really not that stupid at all (see here). The idea is to ask the stupid questions that other juniors might be afraid to ask. Asking stupid questions is a great habit to cultivate; there’s nothing stupider than sitting in a meeting and not knowing what people are talking about. Iris talked about integrating junior developers into teams and what senior developers can learn from them. An example would be to copy their curiosity and hunger for learning new things, something that you might have lost a bit after years of working as a programmer.

The most interesting part of this session was when Iris talked about gender equity and showed statistics indicating that the proportion of women to men in our industry is actually decreasing. This is a fascinating topic for me. I have a 3-year-old daughter and notice this stuff much more these days. Iris recommended watching Sapna Cheryan – Signaling Belonging, so I’ll be watching that over the weekend.

Pairing with Lisa Crispin

Angela Harms was supposed to do this session but had to cancel. Luckily Lisa Crispin offered to do it instead. This was a decent session with pictures of Lisa’s donkeys and a bunch of pair programming tips.

Brian Foote – Software in the Age of Sampling

This session was built around the metaphor of music, which felt quite appropriate as Brian Foote is famous for his Big Ball of Mud metaphor. Brian compared different eras and types of music with the different eras of computer programming. The Waterfall era is Frank Sinatra: first a composer wrote the music, then someone arranged it and finally Sinatra sang the song. Next came the Agilists, who are like the Beatles as they both wrote and played their music. Then came the Turntablists (rap, Grandmaster Flash etc.) who could create new music by changing the original music but without changing the source. Brian then mixed in even more metaphors (this session was chock full of them) and tied in the Mosaic browser as an example of a Turntablist project and a Big Bucket of Glue. Mosaic was mostly just glue code, reusing code that others had already written but adding the ability to view images. And the final music metaphor was the Samplers, like electronic music, where you take small snippets, mix them into your code/music and produce something original that way. Unfortunately, Brian had a bit of a demo fail when doing his DJ show, but it was a reasonable session anyway. There were a load of funny one-liners, e.g. “tasteful, gourmet dumpster diving” to describe how you should work with legacy code. I know he did the session again the day after to redo the DJ demo, so I don’t know which version will be put up on Vimeo.

Vicent Marti – My Mom told me that git doesn’t scale

Vicent Marti works on the backend at Github (which I love, by the way, in case you missed it). I reckon they have some sort of school or university at Github for making slides and practising presentation skills. Vicent (like all of them at Github) could easily find work as a stand-up comedian, and his slides were so polished. I laughed the whole way through this despite it being all about the “boring” details of how they build the Github backend. Vicent started by saying the reasons to attend were either that you want to build a Github competitor or that you find this stuff interesting. He did a great job of making it interesting, and now I know why the network and graph tabs in Github are sometimes slow and why Github doesn’t use the JVM (it’s too modern; they’re still focused on using Unix tools as they’re the simplest way to do git stuff).

Alex Papadimoulis – Ugly Code: Beauty is in the Eye of the Beholder

Alex is the editor of the DailyWTF website (Worse Than Failure, apparently) and therefore has tons of ugly code to show. He started with MUMPS and ended with the recommendation that if you ever have to work on MUMPS code then find another job. In between all this, he showed ugly code, and code that could be considered ugly or not. His definition of ugly code is interesting: ugly code is code that costs more to maintain. Alex is a slick presenter and got lots of laughs out of the audience with all his samples from DailyWTF. He did give a few tips on how to improve your ugly code, the first being: just don’t, if you really don’t have to. His other message to us at Øredev was to stop writing clever code. So more funny than practical, but well worth a watch.

Closing Keynote with Jim McCarthy

Don’t know what to say about this one really. This was a real barnstorming, burn-down-the-barricades speech. Jim McCarthy has worked at Bell Labs, Borland and Microsoft and told the story of how he built the Microsoft Visual C++ team. This then morphed into his vision for the future, with thousands of programmers doing great things, an era of magnificence. That we, the programmers, can hack the culture of the world and that we hold the real power. He talked a lot about how important a shared vision is if you want your team to be high performing, maybe even 10 times better than a team without it. I got a bit lost at the end when he started talking about his new manifesto (maybe?), the Core Protocols (http://www.mccarthyshow.com/). His preacher style made the whole keynote a bit unsettling and I’m not sure he really managed to capture the audience. I’ll have to research this a bit more before giving an opinion.

Meeting Interesting People

I finally got to meet Kristoffer Ahl from DotnetMentor, one of the few OSS .NET devs in Sweden. He works on FluentSecurity, so check that out and send him a pull request. I also met a bunch of former colleagues and had some really deep and involved discussions on programming. A visit to Øredev triggers a lot of deep thinking that I don’t really have time for during the rest of the year. It gets me thinking about the areas of learning I need to focus on and my core beliefs and values as a programmer. The discussions that triggered this were the best part of Day 1.