Whale riding Docker in a sea of Microservices


Saw a couple of interesting talks on Docker and Microservices last week – “State of the Art in Microservices”, the DockerCon Europe 2014 keynote by Adrian Cockcroft; and “Docker in Production – Reality, Not Hype” by Bridget Kromhout, at the March 2015 DevOps-Chicago meetup (links below).

Adrian’s Microservices talk was interesting in that it was not limited to the purely technical realm of Microservices and Docker, but also described the organizational culture and structure needed to make it work:

  • Breaking Down the SILOs – in his example, a traditional “Monolithic Delivery” team must interface with each of 8 autonomous silo groups, often via ticket-driven, bottleneck-prone workflows. Contrast this with two cross-functional “Microservices” teams (a Product Team and a Platform Team), each spanning the formerly-silo’d areas of expertise – making the point that introducing these DevOps-oriented cross-functional teams is a Re-Org
  • Microservice-based production updates may be made independently of other service updates, facilitating continuous delivery by each Microservice team and the reduced-bottleneck, high-throughput flow that results – contrasted with Monolithic Delivery deployments, which work well only with a small number of developers and a single language in use
  • Docker containers facilitate the above by isolating the configuration for each Microservice in its own container, which is resource-light and starts in seconds (and might live for only minutes), vs. a traditional VM-based approach which is more resource-hungry, starts in minutes and typically stays up for weeks
  • Microservice Definition: Loosely coupled service-oriented architecture with bounded contexts – this is the most succinct definition I’ve seen, contrasted with the broader SOA term, which can describe either a loosely or tightly coupled architecture (the latter often in the form of RPC-like WSDL / SOAP implementations). Loose coupling is essential for the independent production updates mentioned above, with bounded contexts (how much a service has to know about other services) an indication of loose coupling. A common example of a tightly-coupled system is a centralized database schema, with the database being the “contract” between two or more components
  • AWS Lambda is an interesting service that scales on demand with 100ms granularity (currently in preview)
  • Example Microservice architectures shown for: Netflix, Twitter, Gilt, Hailo
  • Opportunity identified – Docker Hub as an enterprise app store for components
  • Book Recommendation – Lean Enterprise: Adopting Continuous Delivery, DevOps and Lean Startup at Scale
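
To make the container-per-Microservice idea concrete, here is a minimal, hypothetical Dockerfile sketch – the service name and stack are invented for illustration, but each Microservice would keep a file like this in its own source repository:

```dockerfile
# Hypothetical Dockerfile for one microservice: its configuration
# and dependencies are isolated in its own image, rather than
# shared on a long-lived VM.
FROM python:2.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "service.py"]
```

Because the image carries its own dependencies, containers built from it start in seconds, which is what makes the short-lived, independently-deployed instances described above practical.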

Bridget’s talk about how DramaFever uses Docker in production (since late 2013) described some of the benefits of using Docker:

  • Development more consistent – when developers share Docker containers for their environment, it both reduces friction during development and eases deployment handoff to the shared-dev, QA, staging and production environments. Another side benefit is that a production container can be easily and quickly pulled down by a developer to a local environment for troubleshooting. In their case they went from a 17-minute Vagrant-based developer setup (which also differed from production in its configuration) to a < 1-minute Docker-based one
  • Deployment more repeatable – scaling via provision-on-demand may be done more confidently and in a more automated fashion knowing that the containers are correct. They take the exact container from the QA environment and promote it to Staging then Prod

… and some technical details / challenges:

  • Docker containers in the build pipeline – Docker base images (main app, MySQL emulation of AWS RDS) are built weekly, with Microservice-specific builds of Docker containers dictated by the Dockerfiles in Git source control – she heavily emphasized the importance of a fully-automated, build-server-driven build and deploy pipeline (Jenkins in their case) – no laptops in the build pipeline
  • Monitoring beyond the high-level offered by AWS CloudWatch implemented via Graphite, Sentry
  • Fig used to help containers find each other
  • “Our Own Private Registry” – they found it worked best to run local registries on each workstation rather than a centralized private registry
  • “Getting the Logs out” – host filesystem may be mounted from within the Docker container, to facilitate log export
  • “Containerize all the things” – they use Docker for most things, but have found Chef more effective for some of the infrastructure pieces such as Graphite. As she put it, you need to decide “what do you bake in your images vs. configure on the host after the fact”
  • “About those Race Conditions” – they use the Jenkins “Naginator” plugin to automatically re-run jobs which fail with certain messages such as “Cannot destroy container”
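
Tying a few of those points together, a hypothetical fig.yml along these lines would let the app container find its database by link name and export its logs via a host-mounted volume (the service names, paths and images here are invented for illustration, not taken from the talk):

```yaml
web:
  build: .            # built from the service's Dockerfile in Git
  links:
    - db              # Fig injects db's address so web can find it
  ports:
    - "8000:8000"
  volumes:
    - /var/log/web:/app/logs   # mount host filesystem to get logs out
db:
  image: mysql:5.6             # local stand-in for AWS RDS
  environment:
    MYSQL_ROOT_PASSWORD: devpassword
```

(Fig was later folded into Docker proper as docker-compose, which uses essentially the same file format.)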

I’m looking forward to leveraging Docker to help optimize the deployment process for my current project, which will become even more important as we move toward a more Microservice-based architecture.


Hire screening bucketology

In the process of interviewing many developers and testers over the years, I’ve found hiring is a very inexact science – using good screening techniques is essential to both optimize the chance of a good hire and minimize time invested.

Below are some of the more effective approaches I’ve used and have seen used as a job-seeker.

Tech Screen

In my experience, it works best for a Tech Screen to be done first for both developers and testers (and for technology managers too), and it’s far more efficient to do these by phone than FTF, for both the interviewers (who can cut it off sooner) and the seeker (one less day to wear a suit). Using Skype or other videoconferencing can provide additional early insight into their communication effectiveness.

I’ve found it’s much more important to focus on aptitude than on specific skills, because things change fast in the technology biz, and you’re always going to need your people to be pushing the envelope and learning if you want to innovate and consistently deliver value over time. I like to focus on their approach to how they delivered one example “Most Interesting Project” (MIP) of their choice, including overcoming obstacles, rather than only on the technical aspects of how they used language X or pattern Y – this usually shows whether they had to learn new skills, troubleshoot or think outside the box.

Many recruiters and ads have a tendency to screen based on a specific skill, which I refer to as Buzzword Buzzkill. For example, a Developer role “must have 5+ years C#, using ASP.NET MVC and jQuery”, or a Tester role “must have Selenium WebDriver experience”. Examples of more enlightened criteria I’ve used or seen are “5+ years of C# or Java, full-stack web development, using at least one JavaScript framework” and “must have experience in developing test automation using one or more compiled or interpreted languages”.

I also usually try to make a preliminary determination during a tech screen of how well the current interests (not just abilities) of the candidate align with the needs of the offered job.

Toward the end of a tech screen I’ve seen CoderPad used, which is a great way to see if they “think in code” to a sufficient degree that they can type simple constructs without an IDE or IntelliSense, and to hear them describe their thought process.
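
For illustration, such a warm-up needn’t be elaborate – something like this hypothetical exercise (my own invented example, not from any specific screen) is enough to see whether a candidate can type simple constructs and narrate their thinking:

```python
def most_common_word(text):
    """Return the most frequently occurring word in a string,
    ignoring case. Exercises loops, dicts and string handling --
    nothing that should require an IDE or autocomplete."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return max(counts, key=counts.get)

print(most_common_word("the quick brown fox jumps over the lazy dog the fox"))
# -> the
```

What you’re listening for is less the final answer than the narration: how they choose a data structure, handle case, and reason about ties.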

Toward this end, I have found the following “buckets” useful to quickly weed out candidates who are not a good fit. Note my criteria are weighted toward hiring for product-oriented start-ups / SMBs, or small high-performing teams within larger organizations:

  • Corporate Cog: Has worked primarily for larger companies / on larger teams, and often has to make an adjustment to be effective on a smaller team. Assess whether the person is going to need lots of hand-holding, wants to do only one thing, is not comfortable with perceived risk, doesn’t want to leave their comfort zone of what / how they are accustomed to doing – in general, needs a lot of structure. Example red flags: “I thought you were bigger than this”, asking “do you offer tuition reimbursement or sabbaticals” early in the interview process, or if their past work was mostly driven from an assigned task list vs. working more collaboratively and independently.
  • Lone Ranger: Has worked alone much more than collaboratively with a team. Can occur even on project teams. Example: Someone who has done only narrow contract engagements which didn’t require much interaction or collaboration, or has worked in an isolated manner on teams not following agile principles of frequent review and feedback.
  • New New Thing Groupie: Emphasizes interest in / experience with the latest bleeding-edge technologies or buzzwords, to the exclusion of describing experience delivering projects of any depth. Example: Someone who repeatedly says they have most liked learning new things / are interested in your job because you’re using cool technology, to the exclusion of talking about what they’ve delivered or interest in what your product does.
  • Tire Kicker: S/he is just not that into you. They are often seeking the highest-paying / cushiest-benefits job they can get, to the exclusion of being interested in your product, how they could contribute to it or how they could be challenged. Example red flags: “I’ve got 2 other offers and expect to get 2 more within the next week”.
  • Hyperbolator: Exaggeration of experience on either resume or during tech screen, e.g., listing experience with some technology when it was used only by someone else on a project, or so shallow as to be irrelevant. Examples: “I listed InstallShield but never personally used it, it was used on a project I was on”, “I’ve never written a web service, only consumed them”.
  • Blabbergaster: Similar to Hyperbolator, but differing in that this individual might actually have the relevant experience, but have focus issues and / or underdeveloped listening skills. For example, if they spend more time explaining why they didn’t do certain things on a takehome test than just having attempted them, or if you picture this person leading a meeting where they leave everything open-ended with no ability to drive to closure. Examples: “While working on the takehome test I wasn’t sure what you were looking for, so I stopped”, or “blah blah blah… what was your question again?”
  • Disgruntled Knowledge Worker: Has a fair amount of work experience but seems never to have been satisfied at any job (vs. making the most of each opportunity), complains of being “misunderstood” or “underutilized”, and might be too prone to blame not having found that perfect job on outside factors like the economic downturn.
  • Egghead: Has very strong theoretical understanding, but not enough balance with real biz apps and / or app delivery. Example: A candidate who had worked for years at NCSA and had very strong math skills, but had worked primarily on research projects of 6-month to 2-year duration, with no concept of frequent delivery via an SDLC.
  • Syntax Jockey: Understands the syntax of a programming language fairly well, but does not demonstrate a solid approach to clean design. Most often seen with fresh grads, but sometimes with experienced candidates who have shown more ability in memorizing syntax rules than in analysis, design and application of patterns.
  • No There There: Doesn’t have much depth (at least in areas of interest to the company / job), despite a lot of experience as measured in years or projects – they were in roles on projects where they didn’t do much technically challenging work. Example: Someone with 5 years of .NET experience who had never used a collection construct, or a tester with years of work experience but solely manual testing – no automation or use of a programming language.
  • Take me Mold me: Too inexperienced for the position – would either require an inordinate amount of training, or does not demonstrate enough initiative to come up to speed quickly. When hiring interns or junior-level candidates, the latter is the key criterion.

Soft-skills Screen

Have a different person, such as the hiring manager or an HR recruiter, assess “soft skills” and cultural fit, and delve more into what we’re looking for in the position and to what degree it meshes with what the candidate is looking for – like the Tech Screen, I find this is best done by phone or videoconference. The order of the Tech Screen and Soft-skills Screen may be flipped; I’ve done it both ways.

FTF Interview

During the FTF interview I like to use two interviewers at a time for at least one of the time slots, which allows the interviewers to trade off between talking and thinking (it’s hard to do both well simultaneously, at least for me!) – this drives out how well the candidate deals with multiple streams of interaction as often happens in a work setting, and compresses the interview schedule. I encourage technical interviewers to prepare a short programming exercise for the candidate to do in-person, particularly if no live code evaluation was done during the phone Tech Screen. Two of the key “soft skills” I like to focus on are time management and ability to summarize and analyze information. I also like planning a group lunch with at least some other team members, which helps determine how they would mesh with the team.

Other Resources

  • Laszlo Bock, Google: In Head-Hunting, Big Data May Not Be Such a Big Deal
    • “brainteasers are a complete waste of time” – I’ve found the same, they both artificially inflate and deflate impressions of candidates
    • “Behavioral interviewing also works — where you’re not giving someone a hypothetical, but you’re starting with a question like, “Give me an example of a time when you solved an analytically difficult problem.”” – as mentioned above in Tech Screen, I find focusing on a project someone did, rather than solely leading with specific skills questions, drives out their analysis and coping skills.