Engineering management books

If you’re like me (a first-time manager at a fast-growing software company), you’re likely facing a vast number of organizational issues you’ve never faced before, and they’re coming at you faster than you can learn how to deal with them.

Thankfully, there are smarter, more experienced people out there who have figured out a lot of these topics, so you can copy them, claim credit, and become the leader you always dreamed of becoming and that your company desperately needs!

My list of books I wish I knew about before scaling Gorgias

If it’s not immediately obvious, the book selection and the order below are opinionated. I’m starting with leadership and culture books, then moving into management fundamentals, engineering manager career orientation, hiring engineers, operational best practices, writing strategy and finally scaling teams and productivity.

First fundamentals, then tactics.

Turn The Ship Around!

“Turn The Ship Around!: A True Story of Building Leaders by Breaking the Rules” by David Marquet
My #1 recommended professional book. Manager or not, if you want to figure out what makes a great leader, I recommend reading it!
My personal take on the whole leader-follower dichotomy is that it’s abused in our industry. Being a follower: bad! Being a leader: good!
Great leaders have figured out when to follow and empower the people on their team and when to lead. The art of moving between these modes is what separates noobs from experienced and effective leaders. This is how trust is built and how you encourage growth, ownership and motivation in people.
IMO the best part about this book is that it’s not boring! If you’ve read enough management and self-help books you know what I’m talking about.

No Rules Rules

“No Rules Rules: Netflix and the Culture of Reinvention” by Erin Meyer and Reed Hastings
It’s about the (in)famous Netflix culture. The core idea is that an exceptional company increases talent density and establishes a culture of freedom and responsibility.
Allow me to put it less diplomatically in steps:
  • First, you aggressively fire anyone who is not a “top performer”.
  • Then you pay top of the market salaries to those who are left and recruit the “best” people.
  • Remove redundant bureaucracy because you hired the “best” and they hate stupid rules that don’t bring any value.
  • Make sure they know they own their shit, that they are expected to take risks, and that they will get fired if they don’t get results on time.

The book is full of anecdotes from the CEO and interviews with various employees that make the case for the above.
My recommendation is to take the ideas here with a giant grain of salt: they likely don’t apply to your seed-stage B2B SaaS startup where cash is in limited supply and you have commission-based roles.

Why do I recommend this book then?
Because I think it helps you think about how you should treat product, engineering and design roles. Why it’s worth paying top dollar to get the best possible engineers. Why your designers should not have to jump through ridiculous hoops to get their best work done, and why your product people should be empowered to make risky but calculated decisions.
I think it applies to marketing and other “scalable” roles, but that’s not my place to comment.

The Manager’s Path

“The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change” by Camille Fournier
This is a great “operating” manual for any engineering manager. It starts with coaching and managing ICs, moves on to managing teams and teams of teams, and goes all the way to answering the question: what the hell is the difference between a VPE and a CTO? If you’re looking for orientation about the various roles and growth paths in engineering management, I couldn’t think of a better book. It’s easy to pick up from any point, and you’ll also find a career path/ladder section. It’s a great reference that I always keep on my desk.

Smart and Gets Things Done

“Smart and Gets Things Done: Joel Spolsky’s Concise Guide to Finding the Best Technical Talent” by Joel Spolsky
One could argue that it’s somewhat dated now, but I think it’s still great if you want to follow a simple rule when hiring:
Is the candidate smart and can they get things done?
Today I would add cultural fit: it’s important because you cannot change someone’s values or personality. The point the book makes is that you should not hire academics who are smart but never get anything finished. Nor should you hire people who work a lot but make dubious decisions and constant mistakes.

This book forced me to answer the same two questions after every interview: Are they smart? Can they get things done?
Not sure? Not a good idea to hire.

Accelerate

“Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations” by Nicole Forsgren, Jez Humble and Gene Kim
A data-driven approach to best practices in engineering. Can you measure good engineering? The main point the book makes is that you can increase the velocity of your organization without sacrificing quality, by focusing on the right things. What are those metrics?
Introducing DORA metrics - read more here.
  • Deployment frequency: How often a software team pushes changes to production
  • Change lead time: The time it takes to get committed code to run in production
  • Change failure rate: The share of incidents, rollbacks, and failures out of all deployments
  • Time to restore service: The time it takes to restore service in production after an incident
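
To make these definitions concrete, here’s a toy Python sketch (my own illustration, not from the book) that computes the four metrics from a list of hypothetical deployment records:

from datetime import timedelta

def dora_metrics(deploys):
    # each record is assumed to look like:
    # {"committed_at": datetime, "deployed_at": datetime,
    #  "failed": bool, "restored_at": datetime or None}
    days = max(
        (max(d["deployed_at"] for d in deploys)
         - min(d["deployed_at"] for d in deploys)).days, 1)
    failures = [d for d in deploys if d["failed"]]
    return {
        # deploys per day over the observed window
        "deployment_frequency": len(deploys) / days,
        # average commit-to-production time
        "change_lead_time": sum(
            (d["deployed_at"] - d["committed_at"] for d in deploys), timedelta()
        ) / len(deploys),
        # share of deployments that caused a failure
        "change_failure_rate": len(failures) / len(deploys),
        # average time from failed deploy to restored service
        "time_to_restore": sum(
            (d["restored_at"] - d["deployed_at"] for d in failures), timedelta()
        ) / len(failures) if failures else None,
    }

Feed it your deployment history and you get a rough baseline to track over time - the trend matters more than the absolute numbers.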

Should you use these metrics to draw immediate conclusions about whether your team is amazing or sucks? Metrics without context are a huge danger, but they can trigger valuable investigations and deeper understanding.
I recommend supplementing this book with Swarmia. I have no affiliation with them. Just a fan.

Team Topologies

“Team Topologies: Organizing Business and Technology Teams for Fast Flow” by Matthew Skelton and Manuel Pais
For later-stage scaling teams and how to think about them. The core concepts of the book are cognitive load, Conway’s law and the “Inverse Conway Maneuver”, which translates to: if you need a system with X components, then you should have X teams.
From Jacob Kaplan-Moss’s blog, which goes into more detail:
The main thesis of the book is to engage in “team-first thinking”:
We consider the team to be the smallest entity of delivery within the organization. Therefore, an organization should never assign work to individuals; only to teams. In all aspects of software design, delivery, and operation, we start with the team.

It covers four common patterns for teams:
  • Stream-aligned teams, that are aligned to a single delivery stream, such as a product or service (what others might call a “product team” or a “feature team”).
  • Enabling teams, specialists in a particular domain that guide stream-aligned teams
  • Complicated-subsystem teams that maintain a particularly complex subsystem, such as an ML model
  • Platform teams that provide internal services like deployment platforms or data services
Again, you can read more here.

An Elegant Puzzle

“An Elegant Puzzle: Systems of Engineering Management” by Will Larson
This one is last because I think it’s better suited for bigger orgs. It starts with organizations: how to size teams, the types of malfunctions and how to fix them.
Then it gets into the processes and various rituals that are common in scaling orgs.
Finally, it gets into some work principles and culture, ending with hiring and career growth.
I couldn’t pinpoint a single thing about why I like this book. It addresses a lot of the issues that I’m facing right now at Gorgias.

That’s it for now! In the future I’ll update the list above by adding or removing books. I’ll try to keep it under 10.

Word of advice:

Supplement your book reading by meeting leaders in your space, getting an executive coach, listening carefully to your team and customers, and mentoring people. There are many ways to learn; books, podcasts and blog posts are just one of them. Arguably not the best one.

A Thousand Brains - book review & summary.

The neocortex: the organ that occupies ~80% of the brain’s volume while consuming ~20 watts. In his book A Thousand Brains: A New Theory of Intelligence, Jeff Hawkins tries to illuminate how the cognitive sausage is made by looking at a lot of neuroscience clues and piecing them together into a new theory of intelligence.

I’ll try to summarize and make sense of his theory here, the way I understood it, and I hope it creates excitement for others who are thinking of reading the book. I highly recommend reading it yourself, and I must warn you that I’m not a neuroscientist or an AI person - I’m a software engineer with an interest in these topics.

Before I start, please note that the book is split into two sections - I’ll focus on the first because that’s where my main interests lie. The second section dives into the future of humanity, AI safety, space exploration, and many more interesting topics. I enjoyed all of these, but I admit I wanted more juicy neocortex meat.

I’ve been a fan of Jeff Hawkins since ~2009, when I read his first book, On Intelligence - his HTM theory and what it implied was inspiring, to say the least. It was the first attempt I had heard of to create a theory of how the neocortex works. The second book expands on HTM and attempts to complete it. Armed with a decade+ of AI and neuroscience research at Numenta, he develops a new theory of the mind called A Thousand Brains.

The simplest and shortest way I can explain it is this:

The neocortex holds thousands of models of “objects” made of sensory input and reference frames. These models are learned through sensing and moving. The inference is done by “voting” between concurrent models.

Note that these are all high-level concepts that are useful for explaining how it’s all supposed to work. There are no “objects” or “reference frames” in the brain - there are synapses, dendrites, cortical columns, minicolumns, grid and place cells (or at least growing evidence that they exist), axons, etc… but it’s hard to explain how intelligence works using the “hardware” components alone. It’s like trying to explain how a combustion engine works using quantum mechanics - it’s possible, but it’s not the right level of abstraction. The first part of the book defines this abstraction and connects it with the different “hardware” parts of the brain.

Let’s dive into what each of these concepts means:

Models of objects

An object can be an apple, a person, a math equation, language, democracy, love, etc… These “objects” are all the same to the neocortex. “The same” means that the same building blocks or algorithm (inside cortical columns) are used to represent them. This is remarkable because we don’t normally think of grouping abstract things such as calculus together with an apple. Remarkable as it may be, there’s a lot of evidence supporting this claim (more below).

The brain maintains models of these objects. Not one model per object, but thousands of concurrent models per object. These models are physically distributed across multiple regions of the neocortex into cortical columns. A single cortical column can concurrently hold thousands of models. In the human brain, there are an estimated ~150k cortical columns.

A useful analogy is to think of a sum of models of an object (let’s call it object-model) as a big jigsaw puzzle with 1000 pieces. Individual puzzle pieces are located in separate cortical columns distributed across the neocortex and the sum of them makes the object-model representing “math” for example.

These individual puzzle pieces are confusingly called models as well, hence the object-model term I introduced above. Let’s call this partial model: a piece-model.

The brain creates a view of the world using millions of these object-models distributed across ~150k cortical columns. Visualize millions of jigsaw puzzles randomly stacked on top of each other in ~150k columns, with a huge number of connections between them: as far as I understand it, this is what the theory of a thousand brains looks like. It’s a mess, but if you look at the neocortex you start to think it’s not that bad.

Note that it doesn’t mean that an object-model has one piece-model in each of the 150k columns. It depends on how complex the object in question is.

Reference frames

Now let’s take a single jigsaw puzzle piece, or piece-model, from a single cortical column and see how it’s supposed to work. A piece-model is composed of 2 parts:

  • Sensory input or features (Ex: color, temperature, sound).
  • Reference frames or refs in short.

These refs are like relative cartesian coordinates of different sensory inputs. Let’s bag this concept for now and dive into an example:

Imagine you have an apple in your hand, what does your brain get as input?

  • There are your fingertips, which have spatial coordinates relative to the rest of your body, the pressure relative to “less pressure”, the temperature difference.
  • The weight of the apple felt by your nerves in your arm and shoulder.
  • Your visual system observes the color difference, the corners of the apple, and its distance from your body compared to other objects in your visual field.

The differences in pressure and temperature, the distance between your eyes and the apple: these are all refs.

The example above is pretty static, and while we can imagine a model of an apple from a 100ms experience of holding it, this is not how the brain builds these refs and sensory inputs. Instead, the brain learns both the features of objects and their locations over time.

A single cortical column learns complete models of objects by integrating features and locations over time.

The brain learns these refs and features by moving the object around and observing it over time. Movement appears to be a key factor in learning models.

The what (sensory input) and the where (reference frames) of each model are tightly coupled inside a cortical column. The refs are “implemented” by place and grid cells together with other parts of the cortical column. That these grid/place or “location” cells exist in the neocortex is not yet confirmed, but the evidence is growing.
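
To make this concrete, here’s a toy sketch (my own illustration, not from the book) of a piece-model: features stored at locations in an object-centric reference frame, learned through movement:

class PieceModel:
    def __init__(self):
        # location in the object's reference frame -> sensed feature
        self.features_at = {}

    def learn(self, location, feature):
        # movement supplies the location, the sensor supplies the feature
        self.features_at[location] = feature

    def predict(self, location):
        # once learned, the model predicts what it should sense at a location
        return self.features_at.get(location)

# "learning" an apple by moving a fingertip over it
apple = PieceModel()
apple.learn((0, 0, 1), "stem")
apple.learn((0, 0, -1), "smooth skin")
print(apple.predict((0, 0, 1)))  # -> "stem"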

One last detail I want to add about movement + perception: the “movement” neurons that send movement commands to the old brain (the neocortex doesn’t have direct access to muscles) are also part of cortical columns and are mixed in with sensory and location neurons. They are distributed all over the neocortex.

Inference

Inference is done by voting between these models.

Inference, model convergence? Deciding if a chicken is a chicken. Whatever you want to call it.

Once the cortex has constructed these distributed models, what happens when it tries to identify an object?

Voting!

There are long-distance connections between columns, both inside the same cortical region (Ex: visual) and between regions (Ex: visual and touch). I think the image of the Rubin vase here can give you a clue about what happens: your brain tries to decide between 2 different, equally valid models. What’s curious is that you can see either a vase or 2 faces, but not both at once.

Rubin vase
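
Here’s a toy sketch of the voting idea (again, my own illustration, not from the book): each column proposes the set of objects consistent with its local input, and the object with the broadest support across columns wins:

from collections import Counter

def vote(columns):
    # tally how many columns consider each object a plausible candidate
    tally = Counter()
    for candidates in columns:
        tally.update(candidates)
    obj, support = tally.most_common(1)[0]
    # require a majority, otherwise keep sensing and moving
    return obj if support > len(columns) / 2 else None

columns = [
    {"coffee mug", "bowl"},    # touch column: something curved and hollow
    {"coffee mug", "vase"},    # touch column: a handle-like feature
    {"coffee mug", "teapot"},  # vision column: sees a handle and a rim
]
print(vote(columns))  # -> "coffee mug"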

More evidence to justify the theory

Again, read the book because it’s super interesting! I omitted important things about the brain and neocortex, but here are a couple of things worth mentioning:

  • The neocortex looks the same no matter which region you look at, meaning that cortical columns and their structure are similar across regions. It appears Vernon Mountcastle was the first to make this observation, and, if that’s not OG enough, he also proposed that cortical columns run a common neocortical algorithm.
  • The neocortex looks the same, but not for all things. Language regions appear to be concentrated in certain areas, and their connectivity is greater than in other parts. The density may be higher, but the fundamentals look the same.
  • The evolution of brains looks additive - more complex organisms have bigger and more complex brains. More of the same is the successful evolutionary strategy.
  • The newer the brain, the “less specialized” it looks, the “more uniform” it appears and the “more of it” there is. Recall that the neocortex occupies ~80% by volume. That’s a lot of brain, and it’s not cheap to run! To think that I’m using it to watch memes on the internet…

What’s next? More questions than answers!

  • It appears that the “motivation” or “goals” of the brain are set not in the neocortex but in the “old” brain, and there is a constant “battle” between the two. The “old” brain wants to eat the marshmallow, but the new one has a model of you on a diet. Is this accurate? If yes, what is the mechanism of this interaction?
  • Is it confirmed that grid and place cells exist in the neocortex? It’s a key part of the theory.
  • Recursion (around language and other nested concepts) was mentioned, but I’m not sure I understood how it works.
  • How do location cells and perception cells work together?
  • How does the voting happen technically? I get there are connections between columns, but where does the “winning” of the vote happen?
  • How does prediction happen? I think I understood the basics of primed & inhibitory neurons vs un-primed neurons, but how that leads to prediction is unclear.

Closing thoughts

I’m excited about what I learned and eager to follow the reading recommendations at the end of the book. I realize that I may have misinterpreted parts of the book and there are a lot of details - I hope to correct and improve my knowledge as I discover more about the subject. There is a non-zero chance that Jeff’s theory is wrong, but his theory can be tested which is exciting!

Huge respect for having the audacity to attack this fundamentally hard problem. I’d like to express my admiration for Jeff’s persistence: he started this journey the year I was born, 1986, and he’s still at it! Looking forward to book #3.

Finally, I want to leave you with a video of Jeff explaining what I described in this post much better than I can.

Introducing kuku: kubernetes template tool

At Gorgias we’re using k8s on GKE to run all our production services. We run our REST API apps, RabbitMQ, Celery background workers, PostgreSQL and other smaller services on k8s. We also have staging & development k8s clusters where we experiment with different infrastructure setups and deployments.

If you have multiple k8s clusters to manage, chances are you also need a templating tool to customize your k8s manifests. By far the most popular one these days is helm. There is also ksonnet and, more recently, pulumi. All of these tools are powerful and solve real problems, but they are not quite right for us.

I can’t speak much about ksonnet and pulumi because I’ve only briefly looked at their APIs and how-to guides, so take this with a grain of salt. However, as a user, I can speak about helm, which is what we’ve been using at Gorgias for all our services.

Why not helm?

Well, there are a few things I find problematic with helm:

  • Poor templating language: requires constant referral to the docs, whitespace issues, yaml formatting is hard.
  • Server-side dependency: if you upgrade the server, every user needs to update their client - a waste of valuable time.
  • Lack of local validation: helm lint does not actually ensure the validity (Ex: required keys for a k8s object) of the manifest.
  • Chart names, releases, and other helm specific features do not fit with our current workflow.

At Gorgias, we’ve been using Helm to manage all our k8s resources, and it was great until we had to deal with more complex charts with lots of control flow and ranges. If you’ve ever dealt with ranges and template in Helm, you might know that they’re not easy to manage across different contexts. For example, template "name" . vs. template "name" $ comes to mind.

So why not Ksonnet then?

Ksonnet improves the situation a bit with a stricter approach using the jsonnet language. When I say strict, I mean it doesn’t blindly render a text file into YAML as helm does, but uses a real programming language to produce the YAML at the end.

My main issue with it is the language: jsonnet. Mostly because it’s yet another template language that I’d have to learn, along with its various gotchas. A separate issue is that it introduces a whole set of new concepts such as Part, Prototype, Parameter, etc… I found that a bit too much when all I want is to render a bunch of YAML files with some variables.

Pulumi?

Pulumi comes closest to what I would consider the ideal tool for us. It takes a programmatic approach: it connects directly to your cluster and creates the resources declared in code (TypeScript, Python, etc..). You write TS code, and you provision your infra, progress bar included. There is a lot to like about this approach. There are, however, a few things I don’t like about Pulumi either: the primary language seems to be TypeScript at the moment, which I don’t want to use for infrastructure code. Python templates were in development when I wrote this post, but I didn’t try them.

Pulumi also does infrastructure provisioning (multi-provider), a la Terraform. I think this is overkill for what we need at Gorgias. We don’t have to use those features, of course, but it seems like it tries to solve 2 different and complex problems at the same time. To put it plainly: it’s too much of a Swiss Army knife for us.

kuku: a simple templating tool

Finally, after searching for the right tool, I decided to write my own. kuku is very similar to Helm, but uses python files as templates instead of YAML. It also doesn’t have a server-side component.

Here are some of its goals:

Write python code to generate k8s manifests.

Python is a popular language with a vast ecosystem of dev-ops packages. Most importantly, it’s easier to debug than some of the templating languages used today to generate k8s manifests.

No k8s server-side dependencies (i.e. tiller).

k8s already has a database for its current state (etcd). We can connect directly to the cluster (if needed) from the client to do our operations instead of relying on an extra server-side dependency.

Local validation of manifests.

Where possible do the validation locally using the official k8s python client.
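
As a quick illustration (the exact behaviour depends on your client version), the official python client rejects objects with missing required fields at construction time, before anything is sent to a cluster:

from kubernetes import client

# V1Container requires a name, so leaving it out fails locally:
client.V1Container(image="postgres:latest")
# ValueError: Invalid value for `name`, must not be `None`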

Use standard tools.

Where possible use kubectl to apply changes to the k8s cluster instead of implementing a specific protocol. Again, this allows for easier maintenance and debugging for the end user.

More on Helm comparison

Compared to Helm, there is no concept of charts, releases or dependencies. I found that we rarely used any of those concepts; they just added extra complexity to our charts without much benefit.

Instead there are just 2 concepts that are similar to helm: values and templates.

Values come from the CLI or value files (same as Helm). Templates are just python files that have a template function.

Using kuku

Suppose you want to create a k8s service using a template where you define the service name, internalPort and externalPort.

To install: pip3 install kuku

Given the following service.py template:

from kubernetes import client

def template(context):
    return client.V1Service(
        api_version="v1",
        kind="Service",
        metadata=client.V1ObjectMeta(name=context["name"]),
        spec=client.V1ServiceSpec(
            type="NodePort",
            ports=[
                client.V1ServicePort(
                    port=context["externalPort"],
                    target_port=context["internalPort"],
                )
            ],
            selector={"app": context["name"]},
        ),
    )

You can now generate a yaml output from the above template using kuku by running:

$ ls .
service.py 
$ kuku render -s name=kuku-web,internalPort=80,externalPort=80 .

The above produces:

# Source: service.py
apiVersion: v1
kind: Service
metadata:
  name: kuku-web
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: kuku-web
  type: NodePort

You can also combine the above with kubectl apply -f - to create your service on k8s:

kuku render -s name=kuku-web,internalPort=80,externalPort=80 . | kubectl apply -f -

Same as above, but let’s make it shorter:

kuku apply -s name=kuku-web,internalPort=80,externalPort=80 .

Finally to delete it:

kuku delete -s name=kuku-web,internalPort=80,externalPort=80 .
# same as above
kuku render -s name=kuku-web,internalPort=80,externalPort=80 . | kubectl delete -f -

kuku templates

Let’s return to templates a bit, because a few things are happening there. Templates are python files that define a function called template, which accepts a dict argument context and returns a k8s object or a list of k8s objects. The simplest example:

from kubernetes.client import V1Namespace, V1ObjectMeta

def template(context):
    # simplest possible template: a single namespace object
    return V1Namespace(metadata=V1ObjectMeta(name=context["namespace"]))

You can create multiple template files, each defining its own template function. kuku uses the k8s objects (aka models) from the official kubernetes python client package. You can find them all here.

When writing kuku templates, I highly recommend using an editor that is aware of the k8s python package above so you can get nice auto-completion of properties - it makes life so much easier.

kuku command line interface

Similar to helm, kuku accepts context variables defined on the CLI:

kuku render -s namespace=kuku .

-s namespace=kuku will be passed to the context argument of your template function. Run kuku -h to find out more.

A more realistic example

Defining services and a namespace is nice, but let’s see how kuku behaves with a more complex Postgres StatefulSet. Consider the following directory:

.
├── templates
│   ├── configmap.py
│   ├── service.py
│   └── statefulset.py
├── values-production.yaml
└── values-staging.yaml

We have some value files, a configmap, a service (like before) and a statefulset template. This postgres statefulset template is similar to what we currently run in production at Gorgias.

Let’s have a look at values-production.yaml:

name: pg # global name of our statefulset/service/configmap/etc..

image: postgres:latest

# optional
nodeSelector:
  cloud.google.com/gke-nodepool: pg

replicas: 1

resources:
  requests:
    memory: 10Gi
  limits:
    memory: 12Gi

pvc:
- name: data
  class: ssd
  size: 500Gi
  mountPath: /var/lib/postgresql/data/

configmap:
- name: postgresql.conf
  value: |
    max_connections = 500

Above, we’re defining values that declare that we want to run one instance of the postgres:latest docker image on a specific k8s node pool, requesting some memory and a persistent volume. We’re also using a config map to define our postgresql.conf so it’s easier to keep track of changes to it.

Keep the above values in mind and let’s have a look at our statefulset.py template:

from kubernetes import client


def template(context):
    # volumes attached to our pod
    pod_spec_volumes = []

    # where those volumes are mounted in our container
    pod_spec_volume_mounts = []

    # persistent volume claims templates
    stateful_set_spec_volume_claim_templates = []

    # only create claims for the PVC values that are present
    for pvc in context.get("pvc", []):
        stateful_set_spec_volume_claim_templates.append(
            client.V1PersistentVolumeClaim(
                metadata=client.V1ObjectMeta(
                    name=pvc["name"],
                    annotations={
                        "volume.beta.kubernetes.io/storage-class": pvc["class"]
                    },
                ),
                spec=client.V1PersistentVolumeClaimSpec(
                    access_modes=["ReadWriteOnce"],
                    resources=client.V1ResourceRequirements(
                        requests={"storage": pvc["size"]}
                    ),
                ),
            )
        )
        pod_spec_volume_mounts.append(
            client.V1VolumeMount(name=pvc["name"], mount_path=pvc["mountPath"])
        )

    # same for configmap
    if "configmap" in context:
        volume_name = "{}-config".format(context["name"])
        pod_spec_volumes.append(
            client.V1Volume(
                name=volume_name,
                # the volume needs a ConfigMapVolumeSource, not a plain string
                config_map=client.V1ConfigMapVolumeSource(name=context["name"]),
            )
        )
        pod_spec_volume_mounts.append(
            client.V1VolumeMount(name=volume_name, mount_path="/etc/postgresql/")
        )

    # command to check if postgres is live (used for probes below)
    pg_isready_exec = client.V1ExecAction(command=["gosu", "postgres", "pg_isready"])

    return client.V1StatefulSet(
        api_version="apps/v1beta1",
        kind="StatefulSet",
        metadata=client.V1ObjectMeta(name=context["name"]),
        spec=client.V1StatefulSetSpec(
            service_name=context["name"],
            replicas=context["replicas"],
            selector={"app": context["name"]},
            template=client.V1PodTemplateSpec(
                # the pod labels must match the selector above
                metadata=client.V1ObjectMeta(labels={"app": context["name"]}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="postgres",
                            image=context["image"],
                            lifecycle=client.V1Lifecycle(
                                pre_stop=client.V1Handler(
                                    _exec=client.V1ExecAction(
                                        command=[
                                            # $PGDATA needs a shell to expand
                                            "/bin/sh",
                                            "-c",
                                            'gosu postgres pg_ctl -D "$PGDATA" -m fast -w stop',
                                        ]
                                    )
                                )
                            ),
                            liveness_probe=client.V1Probe(
                                _exec=pg_isready_exec,
                                initial_delay_seconds=120,
                                timeout_seconds=5,
                                failure_threshold=6,
                            ),
                            readiness_probe=client.V1Probe(
                                _exec=pg_isready_exec,
                                initial_delay_seconds=10,
                                timeout_seconds=5,
                                period_seconds=30,
                                failure_threshold=999,
                            ),
                            ports=[client.V1ContainerPort(container_port=5432)],
                            volume_mounts=pod_spec_volume_mounts,
                            resources=client.V1ResourceRequirements(
                                **context["resources"]
                            )
                            if "resources" in context
                            else None,
                        )
                    ],
                    volumes=pod_spec_volumes,
                    node_selector=context.get("nodeSelector"),
                ),
            ),
            volume_claim_templates=stateful_set_spec_volume_claim_templates,
        ),
    )

If you squint a bit, you might see that the final return is similar to a yaml file, but it uses python objects instead, with all of its ifs and for-loops, the standard library, etc..

What I find better than a regular helm YAML template is that you can validate some of the input arguments of those python objects (Ex: client.V1Container) even before the template is sent to your k8s server - not to mention the autocomplete.

Finally, this is how it all comes together:

kuku render -f values-production.yaml templates/ | kubectl apply -f -

The above renders all your templates and generates the yaml manifests, which are then applied using kubectl apply.

You can find the source here: https://github.com/xarg/kuku/ And a couple of examples: https://github.com/xarg/kuku/tree/master/examples

Conclusion

We started using kuku at Gorgias in Sept. 2018, and we’ve since migrated all our Helm charts to kuku templates. It has allowed us to adapt our k8s deployment code to our needs much more easily than before, with minimal deployment surprises.

Hope you find kuku as useful as we did. Happy hacking!

My very subjective future of humanity and strong* AI

The fascination with AGI has been mainstream for a long time, but it has gained even more momentum in recent years. Even Hollywood has become less naive, with movies like Her and Ex Machina.

On the R&D side there is, of course, Deep Learning, a machine learning technique that uses neural networks with 1 hidden layer :P It has, I believe, forever changed the way people do research today. The hype is real because of the state-of-the-art results achieved with it and the way the skills translate across different fields of ML. AlphaGo beats the best player in the world, translation and image/voice recognition are getting better, artistic style stealing, attention models, etc.. The best part is that it’s more or less the same RNN with different neuron architectures, backprop and gradient descent that works across a broad range of problems. Now people are looking for nails because they have a damn mighty hammer.

Of course, hooking up a bunch of NVidia Pascals is not gonna give us AGI, and Moore’s law is not what it used to be. I could not agree more, but if we overcome the hardware issues (and I have high hopes that AR and VR are gonna push this) then it’s reasonable to assume that we’ll have the hardware to achieve at least weak AI soonish…

What about software? That may be a bigger problem. But.. I’m also optimistic here, with things like torch and, more recently, tensorflow getting a ton of attention from some of the best minds in the AI world today. What’s really cool about these frameworks is that they are used every day in production, on real products, by startups and big corps alike. They are here to stay. It’s not enough, but I’m hopeful that things will improve.

Ok, so I want to say something that has been bugging me for a long time - bear with me, I believe it’s important for the arguments that follow.

… is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can…

Now I have a problem with this definition, because I would argue that in a cosmic sense we, the humans, haven’t achieved what I would call general intelligence. We’re kind of good at surviving in the Earth’s atmosphere. We can do many things that are amazing and not accessible to most animals, but we’re still bound to our environment. We’re still, I would argue, narrow in our intelligence and can only grasp a small fraction of what’s out there. There does exist a true AGI: AIXI. It will seek to maximize its future reward in any computable environment (survive and expand), but there is the tiny little problem of requiring infinite memory and computing power in order to function. It’s useful in the same way the Turing machine is useful in the real world. For any intelligent agent to be practical, it requires a favourable environment and a narrow specialisation for that environment. This is why I think what we’re really after is strongish AI, which translates to being pretty cool in your neighbourhood.
