Kayak Winter!

Having just passed the summer solstice, I thought it might be a good time to remind everyone that winter kayaking season is right around the corner.

I recommend planning to go regularly as the weather turns, to ensure you gradually acclimate – to provide inspiration, I’ve put up this photo album from the last two winters I’ve kayaked in Michigan: Kayak Winter bestof 

And here’s a summary of the winter kayak gear I’ve found most important, with specific brands I currently use:

p.s.: Biking is also fun in winter, as shown at Bike Winter photos and Bike Winter gear

KubeCon 2019 insights

Attended KubeCon 2019 in San Diego last week. Great conference, liked the 30min prezo format, got more out of this one than any I’ve attended. And, late November is a great time to be in southern CA, though it’s almost disorienting to see the sun every day.

Key Takeaways

  1. Teams should be structured to minimize cognitive load in order to maximize effectiveness, “minimize cognitive load for others”, “use small, long-lived teams as the standard” – from the Team Topologies book authors
  2. DevEx (Developer Experience) term (new to me) used by the Team Topologies book authors – though this is easily confused with devx.com, I like it. “DevEx must be optimized to maximize feature flow”
  3. CD platform Spinnaker has really taken off, with wide adoption and a healthy ecosystem – Spinnaker Summit, now in its third year, drew 500+ attendees over three days just prior to KubeCon / CloudNativeCon. Armory offers a hosted Spinnaker solution. A couple of my ex-co-workers have presented at Spinnaker Summit and are active members of the community: Joel Vasallo, Steven Basgall
  4. Serverless Frameworks can mean different things, ranging from simple scale-to-zero to full functions-as-a-service. Examples of the latter are OpenFAAS (which supports k8s, Swarm, ECS / Fargate) and the vendor-specific AWS Lambda, Google Cloud Functions and Azure Functions; of the former, Knative (more of a serverless building-blocks technology, which appears destined to be commonly paired with k8s)
  5. Platform as a Product leads to a reliable platform
  6. Observability is largely about being smarter re unanticipated failure events (the “weird zone”) and refining craft in using the monitoring / alerting tooling one already has
  7. Security requires ongoing effort, “treat vulnerabilities like earthquakes” (they’re going to happen, maximize resiliency / recovery)
  8. Kubernetes (k8s) ecosystem is strong and getting stronger, many using it in production, GKE currently more widely used than EKS, but EKS is certainly working for some. “Kubernetes is the OS for the cloud”
  9. Service Mesh delivers the most benefit when most or all applications are on a service-mesh-ready deployment technology (incl. k8s, AWS ECS / Fargate). There are competing service mesh solutions (incl. Istio, Linkerd, AWS-specific-AppMesh, a la carte Envoy, and others), with perhaps no clear leader yet (though Knative may drive Istio adoption). “Service mesh does for service-to-service communication what Kubernetes has done for orchestration”

Favorite talks / workshops

The Elephant in the Kubernetes Room: Team Interactions at Scale – Manuel Pais, Independent (co-author of “Team Topologies”)  ** slides  **  Team Topologies book **  teamtopologies.com  **  How Airbnb Simplified the Kubernetes Workflow for 1000+ Engineers

I’m a third of the way into the book, the best I’ve read since Lean Enterprise. Really well-written, with just the right balance of theory / case-study, and the diagrams are well chosen.

  • DevEx == Dev Experience – simplify, simplify, simplify
  • Platform as a Product – leads to a Reliable Platform
  • Cognitive Load – minimize it within any team, and optimize DevEx by minimizing it for Dev team customers (by providing easy-to-use abstractions)
  • Platform Team: Platforms fit for purpose, optimized for DevEx
  • Primary 3 team types: Stream-aligned (feature delivery), Enabling (DevOps embedding / support for stream-aligned), Platform
  • Platform should make it easier to do the right thing, encouraging dev teams to use the platform and not diverge and be on their own
  • Kubernetes should be a hidden impl. detail with the Platform providing abstractions for good DevEx

OpenFaaS Cloud + Linkerd: A Secure, Multi-Tenant Serverless Platform – Charles Pretzer, Buoyant & Alex Ellis, OpenFaaS, LTD  ** slides

  • OpenFAAS is “Serverless 2.0” (Any code, Anywhere), vs. “Serverless 1.0” vendor-specific platforms incl. AWS Lambda, Google Cloud Functions, Azure Functions
  • Anywhere means k8s, Swarm, ECS / EKS, Datacenter, Local
  • Two options: OpenFAAS, and OpenFAAS Cloud, which bundles Git-based CI / CD, runtime secrets, OAuth, and a Linkerd-based service mesh
  • Simplicity (a short stack.yml + handler.js for a simple node.js example, vs. 6 distinct, longer configs for k8s – see the sketch after this list). Note the OpenFAAS Dockerfile is optional but supported
  • Linkerd (bundled with “OpenFAAS Cloud”): Only service mesh currently in CNCF (incubating): Actionable metrics, Deep runtime diagnostics, CLI-debugging, 60sec install, lightweight. Traffic-splitting, mTLS, Dashboard showing routing paths.
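
For a flavor of that simplicity, here is a minimal sketch of the two files OpenFAAS needs for a node.js function – illustrative only; the function name, gateway URL and image tag are hypothetical, not from the talk. stack.yml:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  hello:
    lang: node
    handler: ./hello
    image: example/hello:latest

…and the matching handler.js, which just echoes the request back:

module.exports = (context, callback) => {
  callback(undefined, { input: context });
};

Deploying is then a single “faas-cli up -f stack.yml”, vs. maintaining a half-dozen separate k8s manifests.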

No-Nonsense Observability Improvement – Cory Watson, SignalFx ** slides

  • “The Normal Zone” includes monitoring for Anticipated behaviors
  • The “Weird Zone” is about Observability of Unanticipated behaviors
  • Observability will be one of your most expensive projects
  • Incident Measures: go beyond traditional ones (incl. MTTD, MTTR), which can be lame – instead look to Nora Jones’ cyclic approach: Difficulties in understanding, System-specific failure rates, Surprises, Lack of ownership, Near misses
  • Automation – need to avoid human being “out of the loop”
  • Invest according to risk and need
  • Understand the use cases

Making an Internal Kubernetes Offering Generally Available – James Wen, Spotify  **  slides

  • “Take complexity for your developers” (more complex devops tooling, better abstractions can be worth it for a better DevEx)
  • Between the extremes “Complete Team Autonomy” and “Centralized Ops” is their happy medium: Ops (embedded) in teams, Core-Infra Org, Golden Path
  • “Establish trust through monitoring”
  • Metrics incl.: Status of backups
  • “If you don’t have restored backups, you don’t actually have backups”

Doing Things Prometheus Can’t Do with Prometheus – Tim Simmons, DigitalOcean **  slides

  • Metrics need to be Actionable, Contextual
  • Learn existing tools deeply – more valuable than shiny new ‘observability’ tools
  • Jeff Smith: “Maintenance is Revenue Protection”
  • Anomaly detection can be easily done with custom code, don’t always need a product with that feature

How Yelp Moved Security From the App to the Mesh with Envoy and OPA – Daniel Popescu, Yelp & Ben Plotnick, Cruise ** slides

  • OPA (Open Policy Agent) case study
  • OPA incl. unit-testability
  • OPA decision logs published to log collector (Splunk)
  • AuthN, AuthZ via Envoy sidecar
  • For projects like this, start from the use cases, be mindful of scope creep

Design Decisions for Communication Systems – Eric Anderson, Google  **  slides

  • Excellent historical context / pros & cons of Messaging mechanisms incl. gRPC, REST, Unix socket, TCP socket, older ones incl. DCOM / CORBA

Weaveworks EKS AppMesh Gitops workshop

  • Pretty well-constructed workshop targeting EKS & AppMesh with a GitOps workflow, using Flux as a k8s operator to promote container images
  • If you were at the conference, you got a nice Cuttlefish shirt with proof of completing this, pictured at the link above

Hiring bucketology – abridged

improve your screening by defining and using no-go buckets

It’s important during the screening process, whether phone screen or FTF, to identify early on if there is not a fit, to efficiently use both parties’ time. Note my criteria are weighted toward hiring developers and SDETs for product-oriented start-ups / SMBs, or small high-performing teams within larger organizations – while no list of this kind is perfect, my experience has been that these “bucket” traits are highly correlated with less successful individuals / teams in these types of environments.

This is an abridged version of my Hire screening bucketology post from several years ago, focused only on the bucket categories, with a few additions / updates to the “buckets”:

  • Corporate Cog: Has worked primarily for larger companies / on larger teams; it’s often a difficult adjustment to be effective on a smaller team. Assess whether the person is going to need lots of hand-holding, wants to do only one thing, is not comfortable with perceived risk, doesn’t want to leave their comfort zone of what / how they are accustomed to working – in general, needs a lot of structure. Example red flags: “I thought the company was bigger than this”, asking “do you offer tuition reimbursement or sabbaticals” early in the interview process, or if their past work was mostly driven from an assigned task list vs. working more collaboratively and independently.
  • Lone Ranger: Has worked alone much more than collaboratively with a team.  Example: Someone who has done only narrow contract engagements which didn’t require much interaction or collaboration, or has worked in an isolated manner on teams not following agile principles of frequent review and feedback.
  • New New Thing Groupie: Emphasizes interest-in / experience-with latest / bleeding edge technologies or buzzwords to the exclusion of describing in-depth experience delivering projects of any depth. Example: Someone who repeatedly says they have most liked learning new things / are interested in your job because you’re using cool technology, to the exclusion of talking about what they’ve delivered or interest in what your product does.
  • Culture Coveter: Someone who leads with saying they are interested in your company because the “culture” is so much better / modern than where they are, the Glassdoor rating is so rosy, etc., while saying relatively little about what interests them about the company or how they would contribute. Note the latter point is in common with the New New Thing Groupie. Sometimes seen in a candidate afflicted with “Corporate Cog” who thinks they want the improved “culture” of a startup / SMB but may not be ready for the everyday chaos, pressure and often greater accountability that comes with it.
  • Tire Kicker: S/he is just not that into you. They are often seeking the highest-paying / cushiest-benefits job they can get, to the exclusion of being interested in your product, how they could contribute to it or how they could be challenged. Example red flags: “I’ve got 2 other offers and expect to get 2 more within the next week”.
  • Hyperbolator: Exaggeration of experience either on resume or during tech screen, e.g., listing experience with some technology when it was used only by someone else on a project, or so shallow as to be irrelevant. Examples: “I listed InstallShield but never personally used it, it was used on a project I was on”, “I’ve never written a web service, only consumed them”.
  • Blabbergaster: Similar to Hyperbolator, but differing in that this individual might actually have the relevant experience, but has focus issues and / or under-developed listening skills. For example, if they spend more time explaining why they didn’t do certain things on a take-home test than just having attempted them, or if you picture this person leading a meeting where they leave everything open-ended with no ability to drive to closure. Examples: “While working on the take-home test I wasn’t sure what you were looking for, so I stopped”, or “blah blah blah… what was your question again?”
  • Dilettante: Differs from Hyperbolator in having actually used various tools and technologies, but has not yet demonstrated deep competence in any area, which is critical for assessing aptitude. “Mile wide and inch deep” – think of this as a wide but very short breed of a T-shaped person.
  • Disgruntled Knowledge Worker: Someone with a fair amount of work experience who seems to have never been satisfied at any job (vs. making the most of each opportunity), complains of being “misunderstood” or “underutilized”, might be too prone to blame their not having found that perfect job on outside factors like the economic downturn.
  • Egghead: Has very strong theoretical understanding, but not enough balance with real biz apps and / or app delivery. Example: Candidate who had worked for years at NCSA, had very strong math skills, but worked primarily on research projects of 6 months to 2 year duration, no concept of frequent delivery via an SDLC.
  • Syntax Jockey: Understands syntax of a programming language fairly well, but does not demonstrate a solid approach to clean design, or what is really happening “under the covers”. Most often seen with fresh grads but sometimes with experienced candidates who have shown more ability to memorize syntax rules than analysis, design and application of patterns. Another symptom of this can be over-reliance on wizards / generators / decorator-frameworks, without understanding what the generated code is doing (or being able to troubleshoot it). Similar to Dilettante but narrower.
  • No There There: Doesn’t have much depth (at least in the areas of interest to the company / job), despite a lot of experience as measured in years or projects – they were in roles on projects where they didn’t do much technically challenging work. Example: Someone with 5 years .NET experience, who had never used a collection construct; a tester who had years of work experience but solely manual testing, no use of tools or programming to at least partially automate. Similar to Syntax Jockey but narrower.
  • Take me Mold me: Too inexperienced for the position; would either require an inordinate amount of training or does not show enough initiative to come up to speed quickly. When hiring interns or junior level, the latter is the key criterion.

 

Process Pattern: Blocks and Blocked Swimlanes

Identifying and mitigating blockers is one of the most important practices in keeping projects on-schedule and maximizing throughput.

I’ve observed three different practices used for JIRA projects in the past to track blocked work:

  1. Flag – this provides a strong visual indication on the board, but requires the flag be manually added / removed
  2. Blocked workflow state – clear board visibility (mapped to column), but requires the state be manually entered / exited
  3. Priority field set to Blocker – clear board visibility (same as for any priority field value, may optionally be swimlane’d), but requires field be manually set to Blocker / back to other priority
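
For example, option 3’s swimlane query is just the following one-liner (assuming “Blocker” is a configured priority value; no plugin required):

priority = Blocker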

The approach I like to use is to rely on the JIRA Blocks / Is-blocked-by link type, to show both blocked and blocking tickets, each in their own swimlane at the top of the board. “Blocking” tickets are important to track when your project is blocking another.

These filters require the popular Adaptavist ScriptRunner plugin be installed on the JIRA server.

Example JQL Filters for Blocked and Blocks

JQL Filters

The intent of the generic JQL filters described here is to support adding “Blocks” and “Blocking” swimlanes to any JIRA project’s board.

AllProj-BlockedDirect: Returns all open tickets having a direct “is blocked by” link to at least one open ticket

issueFunction in linkedIssuesOf("hasLinks='blocks' AND resolution is empty", "blocks") AND resolution is empty

AllProj-BlockedIndirectParent: Returns all open tickets having an indirect (parent of blocked sub-task)  “is blocked by” link to at least one open ticket

issueFunction in parentsOf("filter=AllProj-BlockedDirect") AND resolution is empty

AllProj-BlockedIndirectSubtask: Returns all open tickets having an indirect (sub-task of blocked parent) “is blocked by” link to at least one open ticket

issueFunction in subtasksOf("filter=AllProj-BlockedDirect") AND resolution is empty

AllProj-BlocksDirect: Returns all open tickets having a direct “blocks” link to at least one open ticket

issueFunction in linkedIssuesOf("hasLinks='is blocked by' AND resolution is empty", "is blocked by") AND resolution is empty

Add Blocked and Blocks Swimlanes to board

The following JIRA board swimlanes are defined as:

  • Blocks: Show all open project tickets directly blocking open ticket(s) on any other project
  • Blocked: Show all open project tickets directly or indirectly (parent-of or sub-task-of) blocked by open ticket(s) on any other open project
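
A minimal sketch of the corresponding board swimlane JQL, assuming the saved filter names above (quoted, since the names contain hyphens):

Blocked: filter = "AllProj-BlockedDirect" OR filter = "AllProj-BlockedIndirectParent" OR filter = "AllProj-BlockedIndirectSubtask"

Blocks: filter = "AllProj-BlocksDirect"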

[Screenshot: board with “Blocks” and “Blocked” swimlanes at the top]

 

Notes

  • To run any of the above JIRA filters standalone for a project (outside the board), simply add a “project=” clause and save them to a project-specific filter name
  • Be sure to set the saved JQL Filters permissions to “Everyone” or another permission level giving everyone access who needs it
  • It sometimes makes sense to omit ‘AllProj-BlockedIndirectSubtask’ from the above ‘Blocked’ board filter, to avoid showing too many sub-tasks as blocked
  • With ScriptRunner Cloud, it is sometimes necessary to do the following for a change to be picked up by the filter and by the swimlane, for each of the 4 filters:
    • Refresh the query in the ScriptRunner query editor
    • Select “Sync Filter”
    • For a Jira board already viewed in a browser: select “Board settings”, then “Back to board”

Process Pattern: Sub-task lexicon

One of the challenges I often encounter with JIRA boards is the limited screen real estate available for the ticket summary – often, the visible first few words are not enough to know what the ticket is without clicking on it for detail.

When using sub-tasks, this is often compounded by a sub-task summary which is redundant with the parent ticket, further reducing the effectiveness of the board.

To help improve usability of the board for sub-tasks,  I’ve had good success using the “sub-task lexicon” process pattern on a number of projects, described below.

Example Sub-task lexicon prefixes

The following table of example prefixes presumes the following simple workflow states: Backlog -> Selected -> In-Progress -> QA -> Done

A smallish number of prefixes works best, to keep it simple and consistent over time – when in DOubt, use “DO”:

The lexicon (Prefix – Description – Typical workflow states – Examples – Comments):

  • DO – a taskish sub-task. Typical workflow: Selected -> In-Progress -> Done. Example: “DO Test Env. config.”. Comments: a non-code / QA / spec activity; often requires no AC or QA verify.
  • FIX – a bug. Typical workflow: Selected -> In-Progress -> QA. Example: “FIX Error when saving”. Comments: code the fix, unit-test the fix.
  • IMPL – a story. Typical workflow: Selected -> In-Progress -> QA. Example: “IMPL User REST resource”. Comments: code the story, unit-test the story.
  • QA – an “E Test” (exploratory test) and Verify. Typical workflow: QA -> Done. Examples: “QA Error when saving”, “QA User REST resource”. Comments: Verify ensures AC are met.
  • SPEC – description detail and AC. Typical workflow: Backlog -> Selected. Example: “SPEC User REST resource”.

AC = Acceptance Criteria

Benefits

  • Improved readability on the board
  • Standardized prefixes encourage consistency
  • Encourages the team to think in a routinized way about typical sub-task workflow “profiles” like:
    • Feature: SPEC -> IMPL -> QA
    • Bug:         FIX -> QA

Other notes

  • Some teams use sub-tasks for Code Review (CR), others rely on external tools’ code review workflow such as GitHub / GitLab
  • Some teams create the typical sub-tasks up-front (typical of scrum’s sprint-planning), others create them JIT (works well with kanban or scrum-ban)

 

Dockerize SQL Server using Windows Server 2016 Containers

how to build a SQL Server 2014 Express image from a single “docker build” command

I lead an automation group where we need to easily spin up and down multiple locally-administered database instances, in support of automated testing of web and other applications. Some applications use MySQL, PostgreSQL or MongoDB, which are supported as Docker Linux Containers. For projects using Windows-based SQL Server, there had been no Container option until the advent of Windows Server 2016, now available in preview (TP5).

To dive immediately into how to create a Dockerized SQL Server, see: Windows Server 2016 Support for Docker Containers.

Automation Server Scenario: SQL Server Database Instance private to each test run

[Diagram: simplified automated build / test environment – CI-built images, a pool of generic VMs, per-test-run app + SQL Server container pairs]

The simplified environment diagram above shows how containers are envisioned to be used in the automated build / test environment:

  • Docker images are CI-built when source code and / or dockerfile changes (SQL Server image to be rebuilt at most once / week, mainly to pull latest OS and SQL Server updates to image)
  • Docker containers are spun up on-demand on a pool of generic VMs – nothing to install on the generic VMs except docker (and build server agent) – no dependency on external sql server instance
  • Each test container instance runs against its own dedicated SQL Server container instance; both are torn down at the end of the test – see the sketch below
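
A hedged sketch of that pairing on a generic VM, from powershell – the container name and run ID are illustrative, not from an actual CI config:

$runId = "42"
docker run --name "sql-$runId" -d -p 1433:1433 sqlexpress
# run the test suite (in its own container) against the host VM IP on port 1433, then tear down:
docker rm -f "sql-$runId"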

Another scenario – making private SQL Server instances more readily available to Mac developers:

Mac Developer Scenario: SQL Server Local Instance easily installed

Mac Developers, when working on a project using SQL Server, should have access to their own instance of SQL Server. The SQL Server Express Docker image makes that easier – they need only create a Windows Container Host VM (Windows 2016 TP5 preview) using VirtualBox as described here.

Windows Server 2016 Support for Docker Containers

Windows Server 2016 (available now in a preview edition, TP5) introduces support for Windows Containers, with Docker-compatible command support.

Docker Hub currently has Windows Container images for MongoDB, MySQL and SQLite databases (as well as Redis) – but oddly, not for SQL Server.

See below for info on how to build a SQL Server 2014 Express image from a single “docker build” command. It configures SQL Server Express to listen on static port 1433 and supports a runtime choice of data / backups stored internal-to-container or on a host volume. This was created and tested using Windows Server 2016 TP4 Core.

These steps assume you have already established a Windows Container Host VM (with at least 8G free space) – if not see Creating a Windows Container Host VM.

Building the SQL Server Docker image

If you have git installed on the host VM, you can clone the repo https://github.com/bucrogers/Dockerfiles-for-windows.git

Otherwise, pull the files as follows – from powershell:
wget https://github.com/bucrogers/Dockerfiles-for-windows/archive/master.zip -outfile master.zip

expand-archive master.zip
cd master\dockerfiles-for-windows-master\sqlexpress
Build an image named ‘sqlexpress’ using Docker – this will take up to an hour, longer the first time (the Docker Hub base image gets cached in the local Docker repo) – then list the images:
docker build -t sqlexpress .
docker images

Running a SQL Server Docker Container

The firewall on the host VM must be opened for each port to be exposed from a running container, to be accessed outside the host VM – for a single instance of the SQL Server container, open port 1433 – from powershell:
if (!(Get-NetFirewallRule | where {$_.Name -eq "SQL Server 1433"})) { New-NetFirewallRule -Name "SQL Server 1433" -DisplayName "SQL Server 1433" -Protocol tcp -LocalPort 1433 -Action Allow -Enabled True}
Run a single instance of a SQL Express container, in Docker “interactive” mode, with data hosted inside the container – you’ll end up at a powershell inside the container – when you exit powershell the container will stop:
docker run -it -p 1433:1433 sqlexpress "powershell ./start"
Connect using SQL Server Management Studio (I ran this from Windows 10)  – you’ll need the hostname / IP address of the Windows Container Host VM (via “ipconfig”). The sa password is ‘thepassword2#’.
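
Alternatively, a quick smoke test from powershell, if sqlcmd is installed where you’re connecting from (the IP below is hypothetical – substitute your host VM’s):

sqlcmd -S 192.168.1.50,1433 -U sa -P 'thepassword2#' -Q "SELECT @@VERSION"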

Run two SQL Express containers, this time in “detached” mode, with data volumes on the Host VM, first removing the prior container – from powershell:
..\stop-rm-all
md \sql
md \sql2
docker run --name sql -d -p 1433:1433 -v c:\sql:c:\sql sqlexpress
docker run --name sql2 -d -p 1434:1433 -v c:\sql2:c:\sql sqlexpress
You can now connect to either of these from SSMS on their respective ports – when the containers are stopped and removed, the data and any backups remain on the Host VM, to be automatically reused on a future container run.

SQL Server dockerfile details

The dockerfile full source is here. It facilitates the following:

  • Configuration of SQL Server Express  to listen on static port 1433
  • Installation from SQL Server 2014 Express download link
  • -or- Provided SQL Server edition setup.exe
  • Run with data / backups inside the container (not persisted)
  • -or- Data / backups on host VM (persisted)
  • Bootstrap first-time data initialization for persisted mode
  • Running container either detached or interactive

Below is a walk-through of the key statements in the dockerfile:

Use a base image from Docker Hub including .NET Framework 3.5, since SQL Server requires it –  the first time this is run, it will download to the local Docker registry running on the host, after which it is pulled from the local registry:
FROM docker.io/microsoft/dotnet35:windowsservercore
Declare Docker environment variables for data folder usage etc. – these may be overridden at container-runtime (though that is not useful for this image):
ENV sqlinstance SQL
ENV sql c:\\sql
ENV sqldata c:\\sql\\data
ENV sqlbackup c:\\sql\\backup
Copy files from host into container for use at build time:
COPY . /install
Download the SQL Server 2014 Express installer from the link and extract its files to the expected setup folder (the link may be customized for any other edition of SQL Server, or the dockerfile may be modified to copy existing setup\* files from the host):
RUN powershell invoke-webrequest "http://download.microsoft.com/download/E/A/E/EAE6F7FC-767A-4038-A954-49B8B05D04EB/Express%2064BIT/SQLEXPR_x64_ENU.exe" \
-OutFile sqlexpr_x64_enu.exe

RUN /install/sqlexpr_x64_enu.exe /q /x:/install/setup
Run SQL Server installer, specifying folders, sa password, TCP protocol enabled – special note: the SYSTEM account must be used for the SQL Server Engine service as of Windows 2016 TP4 (the default “Network Service” account does not work):
RUN /install/setup/setup.exe /q /ACTION=Install /INSTANCENAME=%sqlinstance% /FEATURES=SQLEngine /UPDATEENABLED=1 \
/SECURITYMODE=SQL /SAPWD="thepassword2#" /SQLSVCACCOUNT="NT AUTHORITY\System" /SQLSYSADMINACCOUNTS="BUILTIN\ADMINISTRATORS" \
/INSTALLSQLDATADIR=%sqldata% /SQLUSERDBLOGDIR=%sqldata% /SQLBACKUPDIR=%sqlbackup% \
/TCPENABLED=1 /NPENABLED=0 /IACCEPTSQLSERVERLICENSETERMS

Change TCP protocol configuration to listen on static port 1433, since SQL Server Express defaults to a dynamic port:
RUN powershell ./Set-SqlExpressStaticTcpPort %sqlinstance%
Remove the data and backup directories, in order to facilitate running this image with data and / or backups hosted either internal-to-container or on an external volume.

This is necessary since Windows Docker, as of TP4, does not support mapping to an existing container folder. But even when that support is added, it may be preferable to leave this as-is since this approach supports a first-time automatic “bootstrap” copy of data files to an external host volume:
RUN powershell ./Move-dirs-and-stop-service %sqlinstance% %sql% %sqldata% %sqlbackup%

Declare a default CMD, which simplifies invoking “docker run” later – this first invokes start.ps1 to setup files appropriately for an internal or external data volume, then runs a detached-friendly sleep loop:
CMD powershell ./start detached %sqlinstance% %sqldata% %sqlbackup%
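
For reference, a hedged sketch of the kind of first-run bootstrap start.ps1 performs – illustrative only (the seed-folder path and parameter handling are hypothetical; the real script lives in the repo and also provides the detached-friendly sleep loop):

param($mode, $sqlinstance, $sqldata, $sqlbackup)
if (!(Test-Path "$sqldata\master.mdf")) {
    # First run against an empty (possibly host-mounted) data volume: seed the initial data files
    Copy-Item "c:\install\data\*" $sqldata -Recurse
}
Start-Service "MSSQL`$$sqlinstance"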

Creating a Windows Container Host VM on Azure

See here for an example Azure Resource Template to create a Container-ready Windows Server 2016 TP5 VM on Azure.

Creating a Windows Container Host VM on Hyper-V

The following instructions are adapted from Deploy a Windows Container Host to a New Hyper-V Virtual Machine:

Prerequisites

You may run a Windows Container Host VM on any of the following (I performed my testing on both Windows 10 and Azure):

Steps

Open Command Prompt (Admin), then start powershell:
powershell
Determine if a VM Switch is configured – if nothing returned below, create a new *external* VM switch then proceed:
Get-VMSwitch | where {$_.SwitchType -eq "External"}
Pull the create container host script

Important note: modify this before running to increase disk size from 8G to 20G, by searching for "-SizeBytes 8GB":
wget -uri https://aka.ms/tp5/New-ContainerHost -OutFile c:\New-ContainerHost.ps1
Run the script to create the Container Host VM – this will take up to 1.5 hrs. Note there is an optional “-Hyperv” argument which will work only if running on Windows 2016 TP4 installed on a physical box – this is necessary only to evaluate “Hyper-V Containers”, not for “Windows Containers”:
c:\New-ContainerHost.ps1 -VmName <yourVmName> -WindowsImage ServerDatacenterCore

If you get a “running scripts is disabled” error, then execute the following then try again:
Set-ExecutionPolicy RemoteSigned

References

Whale riding Docker in a sea of Microservices

make development more consistent and deployment more reliable


Saw a couple interesting talks on Docker / Microservices last week – “State of the Art in Microservices”, the DockerCon Europe 2014 keynote, by Adrian Cockcroft ; and “Docker in Production – Reality, Not Hype”, at the March-2015 DevOps-Chicago meetup, by Bridget Kromhout (links below).

Adrian’s Microservices talk was interesting in that it was not limited to the purely technical realm of Microservices and Docker, but also described the organizational culture and structure needed to make it work:

  • Breaking Down the SILOs – a traditional “Monolithic Delivery” team must interface with each of 8 autonomous silo groups in his example, often using ticket-driven, bottleneck-prone workflow, vs. having two cross-functional “Microservices” teams (Product Team, Platform Team) which each span formerly-silo’d areas of expertise – making the point that introducing these DevOps-oriented cross-functional teams is a Re-Org
  • Microservice-based production updates may be made independently of other service updates, facilitating continuous delivery by each Microservice team and the reduced-bottleneck, high-throughput that results from it – contrasted with Monolithic Delivery deployments, which work well only with a small number of developers and single language in use
  • Docker containers facilitate the above by isolating configurations for each Microservice in their own containers, which are resource-light and start in seconds (and might live for only minutes), vs. a traditional VM-based approach which is more resource-hungry, starts in minutes and is typically up for weeks
  • Microservice Definition: Loosely coupled service oriented architecture with bounded contexts – this is the most succinct definition I’ve seen, contrasted with the broader SOA term, which can describe either a loosely or a tightly coupled system (the latter often in the form of RPC-like WSDL / SOAP implementations) – loose coupling is essential for the independent production updates mentioned above, with bounded contexts (how much a service has to know about other services) an indication of loose coupling. A common example of a tightly-coupled system is a centralized database schema, with the database being the “contract” between two or more components
  • AWS Lambda is an interesting service that scales on demand with 100ms granularity (currently in preview)
  • Example Microservice architectures shown for: Netflix, Twitter, Gilt, Hailo
  • Opportunity identified – of Docker Hub as an enterprise app store for components
  • Book Recommendation – Lean Enterprise: Adopting Continuous Delivery, DevOps and Lean Startup at Scale

Bridget’s talk about how DramaFever uses Docker in production (since late 2013) described some of the benefits of using Docker:

  • Development more consistent – when developers share docker containers for their environment, it both reduces friction during development and eases deployment handoff to shared-dev, QA, staging, production environments. Another side benefit is a production container can be easily and quickly pulled by a developer to a local environment to troubleshoot. In their case they went from a 17min Vagrant-based developer setup (which also differed from production in its configuration) to a < 1min Docker-based one
  • Deployment more repeatable – scaling via provision-on-demand may be done more confidently and in a more automated fashion knowing that the containers are correct. They take the exact image from the QA environment and promote it to Staging then Prod

… and some technical details / challenges:

  • Docker containers in the build pipeline – Docker base images (main app, MySQL emulation of AWS-RDS) are built weekly, and Microservice-specific builds of Docker containers are dictated by the Dockerfiles in Git source control – she heavily emphasized the importance of a build-server-driven, fully-automated build and deployment chain (Jenkins in their case; no laptops in the build pipeline)
  • Monitoring beyond the high-level offered by AWS CloudWatch implemented via Graphite, Sentry
  • Fig (now named “compose”) used to help containers find each other
  • “Our Own Private Registry” – they found it worked best to run local registries on each workstation rather than a centralized private registry
  • “Getting the Logs out” – the host filesystem may be mounted from within the Docker container, to facilitate log export (see the sketch after this list)
  • “Containerize all the things” – they use Docker for most things, but have found Chef more effective for some of the infrastructure pieces such as Graphite. As she put it, you need to decide “what do you bake in your images vs. configure on the host after the fact”
  • “About those Race Conditions” – they use the Jenkins “Naginator” plugin to automatically re-run jobs which fail with certain messages such as “Cannot destroy container”
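
For the log-export point above, the standard mechanism is a host-directory volume mount at container run time – a minimal sketch, with hypothetical paths and image name:

docker run -d -v /var/log/myapp:/app/logs myapp:latest

The app writes to /app/logs inside the container; the host sees the same files under /var/log/myapp, where the usual log shippers can pick them up.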

I’m looking forward to leveraging Docker to help optimize the deployment process for my current project, which will become even more important as we move toward a more Microservice-based architecture.

References:

Hire screening bucketology

improve your screening by defining and using no-go buckets

I’ve found hiring is a very inexact science – using good screening techniques is essential to both optimize the chance of a good hire and minimize time invested.

Below are some of the more effective approaches I’ve used and have seen used as a job-seeker.

Tech Screen

In my experience, it works best for a Tech Screen to be done first for both developers and testers (and for technology managers too), and it’s far more efficient to do these by phone than FTF, for both the interviewers (can cut it off sooner) and the seeker (one less day to wear a suit). Using Skype or other videoconferencing can provide additional early insight into their communication effectiveness.

I’ve found it’s much more important to focus on aptitude than specific skills, because things change fast in the technology biz, and you’re always going to need your people to be pushing the envelope and learning if you want to innovate and consistently deliver value over time. I like to focus on their approach to how they delivered one example “Most Interesting Project” (MIP) (of their choice), including overcoming obstacles, rather than only on the technical aspects of how they used language X or pattern Y – this usually shows whether they had to learn new skills, troubleshoot or think outside the box.

Many recruiters and ads have a tendency to screen based on a specific skill, which I refer to as Buzzword Buzzkill. For example, a Developer role “must have 5+ years C#, using ASP.NET MVC, jQuery”, or a Tester role “must have Selenium Webdriver experience”. Examples of more enlightened criteria I’ve used or seen are “5+ years of C# or Java, full-stack web development, using at least one JavaScript framework” and “must have experience in developing test automation using one or more compiled or interpreted languages”.

I also usually try to make a preliminary determination during a tech screen of how well the current interests (not just abilities) of the candidate align with the needs of the offered job.

Toward the end of a tech screen I’ve seen CoderPad used, which is a great way to see if they “think in code” to a sufficient degree they can type simple constructs without an IDE or intellisense, and to hear them describe their thought process.

Toward this end, I have found the following “buckets” useful to quickly weed out candidates who are not a good fit. Note my criteria are weighted toward hiring for product-oriented start-ups / SMBs, or small high-performing teams within larger organizations:

  • Corporate Cog: Someone who has worked primarily for larger companies / on larger teams often has to make an adjustment to be effective on a smaller team. Assess whether the person is going to need lots of hand-holding, wants to do only one thing, is not comfortable with perceived risk, doesn’t want to leave their comfort zone of what / how they are accustomed to doing – in general, needs a lot of structure. Example red flags: “I thought you were bigger than this”, asking “do you offer tuition reimbursement or sabbaticals” early in the interview process, or if their past work was mostly driven from an assigned task list vs. working more collaboratively and independently.
  • Lone Ranger: Has worked alone much more than collaboratively with a team. Can occur even on project teams. Example: Someone who has done only narrow contract engagements which didn’t require much interaction or collaboration, or has worked in an isolated manner on teams not following agile principles of frequent review and feedback.
  • New New Thing Groupie: Emphasizes interest-in / experience-with latest / bleeding edge technologies or buzzwords to the exclusion of describing in-depth experience delivering projects of any depth. Example: Someone who repeatedly says they have most liked learning new things / are interested in your job because you’re using cool technology, to the exclusion of talking about what they’ve delivered or interest in what your product does.
  • Tire Kicker: S/he is just not that into you. They are often seeking the highest-paying / cushiest-benefits job they can get, to the exclusion of being interested in your product, how they could contribute to it or how they could be challenged. Example red flags: “I’ve got 2 other offers and expect to get 2 more within the next week”.
  • Hyperbolator: Exaggeration of experience on either resume or during tech screen, e.g., listing experience with some technology when it was used only by someone else on a project, or so shallow as to be irrelevant. Examples: “I listed InstallShield but never personally used it, it was used on a project I was on”, “I’ve never written a web service, only consumed them”.
  • Blabbergaster: Similar to Hyperbolator, but differing in that this individual might actually have the relevant experience, but has focus issues and / or underdeveloped listening skills. For example, if they spend more time explaining why they didn’t do certain things on a take-home test than just having attempted them, or if you picture this person leading a meeting where they leave everything open-ended with no ability to drive to closure. Examples: “While working on the take-home test I wasn’t sure what you were looking for, so I stopped”, or “blah blah blah… what was your question again?”
  • Disgruntled Knowledge Worker: Someone with a fair amount of work experience who seems to have never been satisfied at any job (vs. making the most of each opportunity), complains of being “misunderstood” or “underutilized”, might be too prone to blame their not having found that perfect job on outside factors like the economic downturn.
  • Egghead: Has very strong theoretical understanding, but not enough balance with real biz apps and / or app delivery. Example: Candidate who had worked for years at NCSA, had very strong math skills, but worked primarily on research projects of 6 months to 2 years duration, with no concept of frequent delivery via an SDLC.
  • Syntax Jockey: Understands syntax of a programming language fairly well, but does not demonstrate a solid approach to clean design. Most often seen with fresh grads but sometimes with experienced candidates who have shown more ability in memorizing syntax rules than analysis, design and application of patterns.
  • No There There: Doesn’t have much depth (at least in areas of interest to the company / job), despite a lot of experience as measured in years or projects – they were in roles on projects where they didn’t do much technically challenging work. Example: Someone with 5 years .NET experience, who had never used a collection construct; a tester who had years of work experience but solely manual testing, no automation or use of a programming language.
  • Take me Mold me: Too inexperienced for the position, who would either require an inordinate amount of training, or does not show enough initiative to come up to speed quickly. When hiring interns or junior level, the latter is the key criterion.

Soft-skills Screen

Having a different person, such as the hiring manager or an HR recruiter, assess “soft skills” and cultural fit, delve more into what we’re looking for in the position and to what degree it meshes with what they’re looking for – like the Tech Screen, I find this is best done by phone or videoconference. The order of the Tech Screen and Soft-skills Screen may be flipped, I’ve done it both ways.

FTF Interview

During the FTF interview I like to use two interviewers at a time for at least one of the time slots, which allows the interviewers to trade off between talking and thinking (it’s hard to do both well simultaneously, at least for me!) – this drives out how well the candidate deals with multiple streams of interaction as often happens in a work setting, and compresses the interview schedule. I encourage technical interviewers to prepare a short programming exercise for the candidate to do in-person, particularly if no live code evaluation was done during the phone Tech Screen. Two of the key “soft skills” I like to focus on are time management and ability to summarize and analyze information. I also like planning a group lunch with at least some other team members, which helps determine how they would mesh with the team.

Other Resources

  • Laszlo Bock, Google: In Head-Hunting, Big Data May Not Be Such a Big Deal
    • “brainteasers are a complete waste of time” – I’ve found the same, they both artificially inflate and deflate impressions of candidates
    • “Behavioral interviewing also works — where you’re not giving someone a hypothetical, but you’re starting with a question like, ‘Give me an example of a time when you solved an analytically difficult problem.’” – as mentioned above in Tech Screen, I find focusing on a project someone did, rather than solely leading with specific skills questions, drives out their analysis and coping skills.