JAX London ‘19 ~ The Highlights (I)

Cloud native wonders, battling tech debt, and how developers can do their bit to help with the climate crisis #jaxlondon @jaxlondon

Matthew Lucas
7 min read · Oct 27, 2019

During a week of torrential downpours, grid-locked traffic and a jam-packed tube, I spent my days absorbing new ideas at the JAX London conference in Angel, Islington. Primarily a Java-focused conference, it is squarely aimed at serious enterprise software development rather than playing with squeaky new toys — although that’s not to say there isn’t room for a few blue-sky moments.

This article is an overview of a few of my personal highlights of the conference. These are my own interpretations, and if you find them interesting I encourage you to check out the original talks, which I have linked wherever possible.

Serverless on your own terms — Knative

(by Mark Chmarny)
I’ve been using Kubernetes as part of my day-job for probably two or more years now. As good a platform as it is, it definitely comes with its fair share of struggles. Rather than being able to focus on what matters, I find myself drowning in k8s plumbing more often than feels necessary.

Serverless platforms — AWS Lambda, Google Cloud Functions, etc. — can often make life easier by hiding many of the complexities of infrastructure management, complex routing, eventing and so on, but at the cost of vendor lock-in and the hard limits of the hosting platform.

Knative provides a portable “serverless” platform built on top of Kubernetes that lifts the level of abstraction for the developer and codifies hard-won best practices that may not always be obvious to the less experienced. To quote the homepage:

Knative solves the “boring but difficult” parts of deploying and managing cloud native services so you don’t have to

There are two core parts to the platform: Serving and Eventing.

Serving

Primarily concerned with serving live traffic, it includes features such as:

  • Scaling to zero (save those resources until necessary).
  • Management of code/config revisions, and their smooth deployment into production — canary releases, shadowing etc.
  • Routing and request paths, including access control.
  • Metrics, logging and tracing.

It’s also possible to plug in other services to easily augment the platform — for example, Google’s StackDriver, or APM platforms such as Datadog.
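
As a rough sketch of what this looks like in practice, a Knative Service is declared like any other Kubernetes resource. The manifest below is a minimal, hypothetical example (the service name and image are made up; the `minScale` annotation is what permits scale-to-zero):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale-to-zero
    spec:
      containers:
        - image: example.com/hello:latest       # hypothetical image
```

Each change to the template produces a new immutable revision, which is what makes the canary and shadow deployments mentioned above possible.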

Eventing

If serving handles the synchronous side of app development, eventing complements with events/messaging.

Knative can be plugged into a ton of different sources — Camel, AWS SQS, GCP PubSub, Kafka — the events themselves adhering to the CloudEvents standard.

The sources are declaratively bound together with the triggers and services that consume them. These can scale from just a handful of messages up to full-on streaming pipelines.
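
To give a flavour of that declarative binding, a Trigger might route CloudEvents of a given type from a broker to a consuming service. The sketch below is hypothetical (all names are invented, and the exact API versions have shifted between Knative releases):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-order-created              # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created # matches the CloudEvents "type" attribute
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-handler             # hypothetical consuming service
```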

There are a ton more features to discuss around more advanced use cases, but just touching on these points reveals Knative as one to watch over the next year or so — check out the full talk below.

Resolving technical debt in software architecture

(by @cairolali)

We’ve all experienced those death-march projects. They began as a beautiful greenfield initiative, but three or four years later, after building up an amount of technical debt worthy of a high-interest credit card, it’s time to do battle with a monster! Before long the shouts to ditch and replace roll in, and — if you’re still in business — the whole process begins again.

Welcome to the CRAP cycle.

How do we avoid this fate? First let’s look at how time is spent by the average coder.

How do we spend our time?

If we want to improve our use of time, and the quality of our software, we should invest in making code as easy to read as possible.

How do humans make sense of the world?

And how do we take advantage of this to make our software trivial to understand?

Chunking (and modularity)

Chunking describes the way we are able to link concepts together to form a single “thing”. It’s a cognitive association that allows us to think about a lot of similar, but possibly complex, ideas in one bite. For example, a chess master can much more easily recognise, memorise and recall a familiar game opening when compared with a novice who must remember each individual piece and its position on the board.

We can translate this to software development. We have a number of strategies available to us to chunk concepts in a way that is palatable to other engineers.

High cohesion and loose coupling is a prime example — gather up the pieces that work together and change together; split out the parts that change for different reasons or have orthogonal functionality. The single responsibility principle (SRP) and separation of concerns are all variations on this same great theme. A module should stand strong on its own, and not be coupled to any other that behaves and changes out of step.

Boiling it all down, the point is for a module to reduce the cognitive load on a developer by achieving one common goal, and doing so in such a way that the concepts within stick together easily.
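
As a small illustration of my own (not from the talk), consider splitting a module along its reasons for change — calculation is a business rule, formatting is presentation — rather than lumping both into one catch-all class. All the names here are invented:

```java
// Hypothetical example of cohesion: each class does one job,
// and each changes for exactly one reason.

class InvoiceCalculator {
    // Business rule: total is the sum of the line items plus tax.
    static double total(double[] lineItems, double taxRate) {
        double subtotal = 0;
        for (double item : lineItems) {
            subtotal += item;
        }
        return subtotal * (1 + taxRate);
    }
}

class InvoiceFormatter {
    // Presentation concern: how a total is shown to a human.
    static String asText(double total) {
        return String.format(java.util.Locale.ROOT, "Total due: %.2f", total);
    }
}
```

A tax-rule change now touches only the calculator; a display tweak touches only the formatter — the two modules stand strong on their own.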

Hierarchies (and layering)

Our ability to understand is made a lot easier when we’re presented with a hierarchy instead of a less obvious graph structure.

Taking advantage of hierarchical layering in your software architecture will make navigating the codebase much easier for those poor souls who follow. Technically this means driving abstractions to the top — your valuable business logic — and functionally decomposing the specifics into the appropriate lower layers: the database interactions, I/O etc.
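
A minimal sketch of that layering, with invented names: the business rule sits at the top and depends only on an abstraction, while the storage detail lives in a lower layer that can change independently:

```java
// Top of the hierarchy: an abstraction the business logic depends on.
interface OrderStore {
    double priceOf(String orderId);
}

// Business logic: knows nothing about databases or I/O.
class DiscountService {
    private final OrderStore store;

    DiscountService(OrderStore store) {
        this.store = store;
    }

    double discountedPrice(String orderId) {
        return store.priceOf(orderId) * 0.9; // 10% off: a made-up rule
    }
}

// Lower layer: a concrete storage detail, swappable for a real database.
class InMemoryOrderStore implements OrderStore {
    public double priceOf(String orderId) {
        return 100.0; // stub value for illustration
    }
}
```

A reader can follow the tree from `DiscountService` downwards instead of untangling an arbitrary graph of dependencies.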

Schemata / mental models (and patterns)

Schemata, also known as mental models, are those frameworks that help us make sense of the world. We all know what a ‘teacher’ is, how to interact with them, and what we can reasonably expect from them. We have thousands and thousands of such models that we share — banks, postal service, airports, to name a few. They are instantly recognisable and take little effort to recall and recognise.

And it’s this familiarity that is important for us. We have a whole catalogue of such models at our fingertips in software development: design patterns.

Use a design pattern well, and it can make your code trivial for others to understand and modify. They’ve seen the factory pattern a hundred times before, and your use of it will present little that’s new to them.

However, use design patterns badly or in the wrong context, and you could end up having the opposite effect — leading your co-worker down a long and winding path to confusion and frustration.
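
To make the factory example concrete, here is a minimal, hypothetical sketch. A reader who already holds this schema can absorb it in one glance — which is exactly the point:

```java
// The classic factory pattern: callers ask for a Shape by name
// instead of newing up concrete classes themselves.

interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    public double area() {
        return Math.PI * radius * radius;
    }
}

class Square implements Shape {
    private final double side;

    Square(double side) {
        this.side = side;
    }

    public double area() {
        return side * side;
    }
}

class ShapeFactory {
    static Shape create(String kind, double size) {
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown shape: " + kind);
        }
    }
}
```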

It’s worth reiterating: 70% of coding is comprehension. Write for readability and you’ll reap the rewards down the line.

Rethinking performance in the cloud

(by @christhalinger)

From what seemed likely to be a highly technical dive — tweaking and tuning your cloud apps — Chris Thalinger pivoted into what was easily the most inspirational talk of the conference: one about our impact on the environment and how we, as developers, should be conscious of it. During the first week of the Extinction Rebellion protests in London, this talk was particularly poignant.

Before getting into the core of the topic, there was one sound bite to take away on technical performance:

Know your compiler optimisations, and don’t be too smart!

Or in other words:

  • Write code as it’s supposed to be.
  • Code in patterns that are usual for your language …
  • … and therefore what the compiler expects, so it can optimise.
  • Code is for consumption by humans, not compilers.
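
As a made-up illustration of the “don’t be too smart” advice: both methods below compute the same sum, but the obvious loop is the shape HotSpot’s JIT recognises and optimises well, while the hand-unrolled version mostly just obscures intent:

```java
class Sums {
    // Idiomatic: a plain loop the JIT can unroll and vectorise itself.
    static long sumIdiomatic(int[] values) {
        long total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    // "Too smart": manual four-way unrolling that buys little and
    // hides the intent from both humans and the compiler.
    static long sumClever(int[] values) {
        long total = 0;
        int i = 0;
        for (; i + 4 <= values.length; i += 4) {
            total += values[i] + values[i + 1] + values[i + 2] + values[i + 3];
        }
        for (; i < values.length; i++) {
            total += values[i];
        }
        return total;
    }
}
```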

But now for the important stuff.

On our impact

Today, the data centres that power our online shopping, tweeting and Netflix-viewing consume around 2% of all energy produced globally. This is estimated to double or triple by 2030, and it’s our responsibility to ensure that the energy consumed is well spent or, even better, not consumed at all.

Two core culprits for consumption are computation and networking.

Compute carefully

Many large corporations are making positive steps to tackle their data centre power usage.

Google — now runs its data centres on 100% renewable energy. As great as this is, there is only so much green power being produced, which may push others onto dirtier energy sources than they would otherwise use. To combat this, Google is building its own renewable power plants.

Microsoft — enforces an internal carbon tax on all the services it produces. The more energy-hungry a product, the more the tax eats into its net profit.

Twitter — switched to running its Scala services on top of GraalVM. Along with an ML-driven optimiser for JVM parameters, this cut the energy consumption of its core tweet service by 18%.

But is it enough — what are the problem areas?

Eco gremlins

Bitcoin — mining is hugely destructive in its gigantic use of energy. Coins that were easy to mine on a home PC in the early days become ever more difficult and power-hungry to obtain as time goes on, by design, to simulate scarcity.

Video games — an enormous market, and generally a very CPU/GPU-intensive workload. If manufacturers made even marginal gains in the efficiency of their consoles, the savings at a scale of hundreds of millions of users would be enormous.

Network cost

Second to the data centres for power consumption are the networks that route our data. Data reduction and caching could help save on waste here.

Keep your apps light, cache often, and use CDNs — both for performance and to help save the planet.
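
As one small, hypothetical example of caching to cut repeat work, Java’s `LinkedHashMap` can be turned into a tiny in-process LRU cache in a few lines — every request served from memory is one that costs neither recomputation nor a network hop:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache built on LinkedHashMap's access-order mode:
// reads refresh an entry's recency, and the eldest (least recently
// used) entry is evicted once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = access-order iteration
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }
}
```

This is only a sketch for a single process; in a real system the same idea scales out through shared caches and CDN edges.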

The takeaway here — performance tuning has a real world impact, and not just for your users. Be mindful of the effect that you’re having.

That’s it for the first few highlights. I’ll add links to the videos when they arrive online — definitely worth a watch!

