Team Topologies - Organizing for fast flow of value


Deployment pipelines and service abstractions for Stream-aligned teams

by Matthew Skelton

Deployment pipelines can really help Stream-aligned teams to deliver software changes independently. I’ve been using deployment pipelines since 2011 starting with GoCD and then other tools. A few months ago, I joined DevOps experts Helen Beal of Ranger4, and Sam Fell & Anders Wallgren of Electric Cloud to discuss deployment pipelines for modern software delivery as part of the Continuous Discussions (#c9d9) series (episode 88).

(YouTube video segments below)

Key takeaways:

  1. Deployment pipelines can help to reinforce an independent flow of change for a Stream-aligned team. Don’t forget to enable rapid feedback via telemetry!

  2. Define the endpoints external to the team - these represent the “outside world” from the team’s perspective. These external endpoints should sit outside or at the domain boundary.

  3. There can be huge value in managing your deployment pipeline “as a Service”, or as a proper product, with product management approaches.

Transcript (extracts):

Flow and feedback using deployment pipelines and telemetry

[00:29:48]


Matthew Skelton:

So the way that I like to think about it, and the way that I recommend it to clients, is to see the flow from development to operations (the first way) as provided by deployment pipelines. Fundamentally, the only way we get that flow is through the deployment pipeline.

The feedback from operations back into development I see as telemetry - logging and metrics and tracing and these kinds of things. They need to be first-class concerns, but they also need to be wired back into development. Developers need to be able to see the data that's coming out of production systems. Now, if you need to mask some of the data for compliance reasons - go ahead and mask the data. It's 2018, we don't need to roll our own masking and stuff like this - this is out of the box in these tools these days.

So we need to make that flow happen; that's how we get awareness of what's happening in production. Then the continual experimentation in the middle - we've got to have logging and metrics and telemetry in all of our environments: on my developer laptop, in the test environment, in the UAT or pre-live environment, whatever it's called. We have telemetry in these environments because we want fast feedback. We don't just have a special production environment with special tooling; we have the same tooling everywhere and we interact with these environments in the same way.
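As a rough sketch of that “same tooling everywhere” idea - purely illustrative, using Python's standard logging module rather than any specific tool from the discussion - the same structured-logging configuration can run in every environment, with a simple masking filter for sensitive values. The environment variable name and the masking rule are assumptions for the example:

```python
# Illustrative only: identical logging setup in every environment, with masking.
import json
import logging
import os
import re


class MaskingFilter(logging.Filter):
    """Redact values that look like email addresses before log records are emitted."""
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.EMAIL.sub("<masked>", str(record.msg))
        return True


def configure_logging() -> logging.Logger:
    # Identical configuration everywhere; only the environment label differs.
    env = os.getenv("ENVIRONMENT", "dev")  # e.g. dev, test, uat, production (assumed variable)
    handler = logging.StreamHandler()
    handler.addFilter(MaskingFilter())
    handler.setFormatter(logging.Formatter(
        json.dumps({"env": env, "level": "%(levelname)s", "msg": "%(message)s"})
    ))
    logger = logging.getLogger("service")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger


log = configure_logging()
log.info("order placed by customer jane@example.com")  # the email address is masked
```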

We get our fast feedback because we’re deploying to one environment - works fine - then to the next environment using our deployment pipeline, and we get some feedback from the telemetry: “oh, something's gone wrong, why is that the case?” I can quickly go and diagnose that problem. Maybe it's a problem with the deployment pipeline itself - something's misconfigured or it's run out of resources or whatever. That's fine: I can use my tooling, my logging, and look at the logs coming out of the deployment pipeline itself. Or maybe it's a problem with the environments - that's fine too. And we iterate very quickly - commit a change that addresses the problem, see it flow through, and eventually it's ready to flow into production.
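To make the shape of that flow concrete, here is a minimal sketch in Python (not GoCD or any particular pipeline tool) of promoting an artifact through successive environments and stopping as soon as telemetry from the newly deployed environment reports a problem. The environment names and the deploy/telemetry functions are placeholders a real pipeline would supply:

```python
# Illustrative sketch: promote the same artifact through successive environments
# and stop at the first environment where telemetry reports a problem.
from typing import Callable

ENVIRONMENTS = ["dev", "test", "uat", "production"]


def promote(artifact: str,
            deploy: Callable[[str, str], None],
            telemetry_healthy: Callable[[str], bool]) -> bool:
    """Deploy `artifact` to each environment in turn; stop at the first telemetry failure."""
    for env in ENVIRONMENTS:
        deploy(artifact, env)
        if not telemetry_healthy(env):
            # Fast feedback: diagnose here (pipeline config, environment, code)
            # before the change goes any further towards production.
            print(f"Telemetry reported a problem in {env}; stopping promotion.")
            return False
        print(f"{artifact} verified in {env}")
    return True


# Stubbed usage: real implementations would call the pipeline tool and query telemetry.
promote("my-service-1.2.3",
        deploy=lambda artifact, env: print(f"deploying {artifact} to {env}"),
        telemetry_healthy=lambda env: env != "uat")  # pretend UAT shows an error
```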

I think a lot of the organizations that I see haven't really thought about the flow and the feedback in those terms. Deployment pipelines are absolutely fundamental, telemetry is absolutely fundamental, and making both of those things happen across all the environments we want to have in play is absolutely essential - that's what gives us the three ways of DevOps that Gene Kim talks about.

Define the endpoints external to the team

[00:40:39]


Matthew Skelton:

So it's useful to consider external endpoints - whatever they are - particularly if they're fully outside of your organizational boundary. Actually, there comes a point, particularly if you follow Conway's law and a nice segmentation of domain boundaries and keep teams isolated in a good way along the right domain boundaries, where even internal endpoints that we call get treated in the same way as external endpoints, because they're just another endpoint that's outside of our team domain.

We've got a concept of which endpoints we need to interact with and what they are logically, but also, obviously, for a given environment, what the actual address of the endpoint is and all that sort of stuff. Not only have we dealt with the configuration of our world, but we've also got the concept of what we actually depend on at runtime, as well as the dependencies we've pulled in at build time. Keeping those two things as separate concepts is really important, particularly as we start to connect to many, many more things in a kind of microservices, small-services type of world.
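One way to picture keeping those concepts separate - a minimal sketch with made-up endpoint names and addresses, not a recommendation of any particular tool - is to declare the logical runtime dependencies once and resolve their concrete addresses per environment:

```python
# Illustrative only: logical runtime dependencies vs. per-environment addresses.
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class Endpoint:
    name: str         # logical name of the runtime dependency
    environment: str  # which environment this address belongs to
    url: str          # the actual address in that environment


# Logical runtime dependencies: endpoints outside the team's own domain (assumed names).
EXTERNAL_ENDPOINTS = ["payments-gateway", "customer-directory"]

# Concrete addresses, resolved per environment at deployment time (assumed URLs).
ADDRESSES = {
    ("payments-gateway", "test"): "https://payments.test.internal",
    ("payments-gateway", "production"): "https://payments.example.com",
    ("customer-directory", "test"): "https://directory.test.internal",
    ("customer-directory", "production"): "https://directory.example.com",
}


def resolve(environment: str) -> List[Endpoint]:
    """Turn the logical dependency list into concrete endpoints for one environment."""
    return [Endpoint(name, environment, ADDRESSES[(name, environment)])
            for name in EXTERNAL_ENDPOINTS]


print(resolve("test"))
```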

Deployment pipelines “as a Service” and managed as a Product

[00:49:03]


Matthew Skelton:

I think there are a couple of things that I've seen that can be real enablers. One is self-service - a kind of “pipelines as a service”. That needs greater maturity than many platform teams in many organizations have: we actually need to treat the platform as a product, and we need to understand our users, our customers - the development teams, testers, BAs, software developers - and actually treat them like that: reach out, understand what their needs are, listen to what they're saying they need, and respond in an appropriate way.

So, we have a product owner for our platform and its capabilities. That's a step up for a lot of traditional platform or sys-admin teams. The additional part, which I think many organizations miss, is giving space for people - or even sometimes a whole dedicated team - to act as a kind of enablement for continuous delivery practices, making sure they have the space to actually work with different teams - particularly in large organizations where you've got something like 20-50 development teams.

So, enable a group of people who either come together as a community or a guild or something like this, or form an actual team that spends time working with other teams to help them understand new ways of doing things - understand why they should adopt a pattern of semantic versioning for their artifacts, for example - and guide them through why they should now move away from - what was your phrase, Helen? - the ‘dangerous scripted monster’. Move from that kind of way of doing things, which was okay but doesn't scale, to something like, let's say, model-driven pipelines instead, and actually help them understand why they need to relearn some of this stuff, why they need to throw away some of the scripts, because it brings these benefits - and maybe give them a leg up, a helping hand, to move to a better way of doing things.
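For the semantic versioning pattern mentioned here, a tiny illustrative sketch (deliberately simplified, not a full semver implementation) of how an artifact version might be bumped depending on the kind of change:

```python
# Illustrative only: MAJOR.MINOR.PATCH bumping for released artifacts.
def bump(version: str, change: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version according to the kind of change."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":   # incompatible change for consumers
        return f"{major + 1}.0.0"
    if change == "feature":    # backwards-compatible addition
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # backwards-compatible fix


assert bump("1.4.2", "breaking") == "2.0.0"
assert bump("1.4.2", "feature") == "1.5.0"
assert bump("1.4.2", "fix") == "1.4.3"
```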

A lot of organizations seem not to invest that time and those people, and it's like oil in the machine: it's not a huge number of people, but it's just enough oil to make things happen. I think that's also quite important, particularly when, as we said, we're moving from older or more manual ways of doing things to ways of doing things which are much more industrialized and to some extent standardized. Because that takes away a lot of the boilerplate work, helping people understand why it's a good thing to do is really important.


Thanks to Helen Beal, Sam Fell, and Anders Wallgren for a great discussion!

Watch the whole screencast - #c9d9, episode 88:

Best practices for how to model your pipelines, environments and applications for maximum scale and re-usability