Erik Winter - Go Developer
Creative, analytical minimalist. Always trying to use less and do more.
- 8 years of professional Go experience
- 5 years of management experience
- 25 years of experience developing backends for the internet
- Part of 3 successful start-ups
I am an experienced backend developer who flourishes in non-standard situations: the times when a new way of doing things must be invented because the standard way doesn't work anymore, or simply because it doesn't exist yet.
Often, the building blocks are already mostly there. They just haven't been recognized yet.
Contracting and Freelance
Current status: Available for work.
If you think I can be of help, you can reach me at:
Based in the Netherlands, timezone UTC/GMT +2:00
Technical Skills
Programming Languages
Go, Python, PHP, JavaScript, Ruby
Cloud & Infrastructure
Amazon Web Services (AWS), Google Cloud Platform (GCP), Docker, Kubernetes, RabbitMQ, Pulumi, Auth0, GitLab CI/CD, GitHub Actions, AWS AppRunner, AWS EC2, AWS Batch, AWS Lambda, AWS S3, GCP Cloud Run, GCP Cloud Storage, Datadog, ELK, Sentry, New Relic, Debian, Linux, MongoDB Atlas
Databases & Storage
PostgreSQL, MongoDB, AWS Neptune, Redis, SQLite, MySQL/MariaDB, Elasticsearch
Patterns, Protocols, Formats & Frameworks
REST, Microservices, SQL, NoSQL, CI/CD, Git, middleware, ODRL, Turtle, RDF, SPARQL, WebSockets, Clean Architecture, OpenAPI, JSON-LD, JWT, TDD, SCRUM, DDD, MVC, Rails, CQRS, OOP, ISTQB, Risk-Based Testing, BDD, API, CLI, CRUD, CSS, CSV, HTML, JSON, PDF, RFC, Selenium, Tricentis Tosca
Work experience
Senior Go Developer at DeonticData
July 2022 - May 2025, contract, remote
DeonticData is a start-up that aims to automate data compliance in the financial industry by converting contracts and price lists into machine-readable representations, using a format called ODRL.
Developed a REST/JSON API in Go for an AWS Neptune RDF triple store
ODRL is built on RDF because a knowledge graph is well-suited to express the rich data structures found in contracts. A populated graph can answer questions like "Under what conditions is this action allowed, and are they met?" But there is also a great need to manage all the different entities that are involved, like actions, permissions, duties, parties, prices, etc.
The challenge in managing all these entities was that the chosen graph store, AWS Neptune, does not offer any data integrity functionality. There is no schema, there are no transactions, etc. There is certainly no ORM for it. It really is just a big bucket of triples.
To support the rapid development of various web applications, I developed a Go backend that directly interfaces with the SPARQL endpoints of AWS Neptune and that can be used as a simple REST/JSON CRUD store for the ~120 different types of ODRL entities. It rigorously validates the entities and makes sure the data always adheres to the model. It handles concurrent requests, uses JWT tokens for authentication, etc. It behaves like any other REST/JSON API, but if desired, it can also respond in JSON-LD.
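To give a flavour of the approach, below is a much simplified sketch of one such endpoint: validate an incoming entity, then write it to Neptune's SPARQL UPDATE endpoint over HTTP. The entity shape, endpoint URL, and IRIs are illustrative; authentication, JSON-LD output, and the other entity types are omitted.

```go
// Simplified sketch, not production code: one create handler that validates
// an incoming entity and writes it to the triple store via SPARQL UPDATE.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// Action is one illustrative ODRL entity; the real service covered ~120 types.
type Action struct {
	ID    string `json:"id"` // expected to be a full IRI
	Label string `json:"label"`
}

func (a Action) validate() error {
	if a.ID == "" || a.Label == "" {
		return fmt.Errorf("action requires both id and label")
	}
	return nil
}

// sparqlEndpoint is an assumed Neptune cluster endpoint.
const sparqlEndpoint = "https://example-neptune:8182/sparql"

func createAction(w http.ResponseWriter, r *http.Request) {
	var a Action
	if err := json.NewDecoder(r.Body).Decode(&a); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if err := a.validate(); err != nil {
		http.Error(w, err.Error(), http.StatusUnprocessableEntity)
		return
	}

	// Translate the validated entity into a SPARQL INSERT.
	update := fmt.Sprintf(
		`INSERT DATA { <%s> a <http://www.w3.org/ns/odrl/2/Action> ;
			<http://www.w3.org/2000/01/rdf-schema#label> %q . }`,
		a.ID, a.Label)
	resp, err := http.PostForm(sparqlEndpoint, url.Values{"update": {update}})
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(a)
}

func main() {
	http.HandleFunc("/actions", createAction)
	http.ListenAndServe(":8080", nil)
}
```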
Speedy iteration was achieved by constructing a fully automated CI/CD pipeline that used multiple interacting Docker containers, running in GitHub Actions, with thousands of unit tests and broad integration tests. The backend was automatically deployed to an AWS AppRunner instance in either a dev or prod environment. Identity management was handled by Auth0.
Developed a Go service to serve citations and run Python AI pipelines
The main source of contract and pricing information was PDFs. Not thousands of them, but enough to make it very impractical to process them locally and share the results by sending around CSV files. In addition, the data that the locally run Python AI pipelines extracted from these PDFs needed references back to the original documents, so the results could be verified.
Huge gains in productivity were possible by moving these functions to a central cloud platform and automating them as much as possible. However, this was an already running, essential, but also still changing process. Taking time to build a completely new platform was impractical. Instead, I proposed a bottom-up approach, where everyone could continue to work, but where functions would be moved and/or automated step by step.
Steps taken in the first phase:
- Define a file hierarchy that expresses types of documents, their relations, and their different stages of processing
- Move all documents to an AWS S3 bucket following that hierarchy
- Convert the Python AI pipelines into AWS Batch processes
- Develop a Go service on AWS AppRunner that
  - performs sanity checks on the files
  - triggers the AI pipelines
  - imports the resulting references into a PostgreSQL database
  - serves the references and the documents through an API
A colleague developed the frontend that would load the relevant documents and highlight the source of a data point. Full CI/CD was again implemented with Docker and GitHub Actions, and identity management was again handled by Auth0.
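Triggering a pipeline from the Go service boiled down to submitting an AWS Batch job. A minimal sketch of that step using the AWS SDK for Go v2; the job queue, job definition, and parameter names are illustrative, and the sanity checks, PostgreSQL import, and API are left out:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	awsconfig "github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/batch"
)

// triggerPipeline submits one AWS Batch job for a document that has passed
// the sanity checks. The S3 key follows the agreed file hierarchy.
func triggerPipeline(ctx context.Context, client *batch.Client, s3Key string) error {
	_, err := client.SubmitJob(ctx, &batch.SubmitJobInput{
		JobName:       aws.String("extract-citations"),
		JobQueue:      aws.String("ai-pipelines"),
		JobDefinition: aws.String("citation-extractor"),
		Parameters:    map[string]string{"document": s3Key},
	})
	return err
}

func main() {
	cfg, err := awsconfig.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	client := batch.NewFromConfig(cfg)
	if err := triggerPipeline(context.Background(), client, "contracts/raw/example.pdf"); err != nil {
		log.Fatal(err)
	}
}
```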
Prototyped several AI chatbots for contract information
There was great interest in having an LLM-powered chatbot that could be queried for contract data, even though LLMs can answer with incorrect information.
A series of prototypes was developed to find the setup that would provide the most accurate answers. Current LLMs can handle a context window large enough to hold the complete contract information in Turtle format, so several setups were tried out and compared.
The prototypes focused on limited data sets. Scaling them up was postponed in anticipation of the coming Google AgentSpace (GCP).
Senior Go Developer at PublicSonar
May 2020 - June 2022, permanent, remote
PublicSonar was a startup that provided real-time information from social media to the emergency planning and response teams of public safety providers (police, fire brigades, etc.).
They developed a SaaS platform powered by 75+ microservices, mostly Go and some Python, all running in Docker containers orchestrated by Kubernetes and communicating through RabbitMQ, that processed around 7 million messages per day. Main storage was provided by 14 sharded MongoDB instances, supplemented with services like Elasticsearch and GCP Cloud Storage. Several AWS SageMaker ML services provided summarisation, sentiment analysis, and more. GitLab CI/CD was used for automated testing and continuous integration.
In 2024, PublicSonar was acquired by Maltego.
Designed and implemented integration of ML entity recognition
The data scientists developed more advanced forms of aggregation with entity recognition. For instance, from a tweet reporting on a robbery, a collection of AWS SageMaker services could infer the time, location, perpetrator, method, and victim.
To present this information in the frontend, the challenges to overcome were:
- The AWS SageMaker services required careful batching for efficient processing
- The data model used in communication with the frontend was incompatible with this change
For the backend, I used a Pipes and Filters architecture that initially used Go channels to move data from one stage to the next. Later I migrated a few of the stages to RabbitMQ queues for better resilience and to simplify monitoring.
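In its simplest form, such a stage is a goroutine that reads from one channel, does a single job, and writes to the next channel. A minimal sketch; the message type and the "recognition" step stand in for the real batched SageMaker calls:

```go
package main

import (
	"fmt"
	"strings"
)

// message is an illustrative payload flowing through the pipeline.
type message struct {
	Text     string
	Entities []string
}

// ingest is the first filter: it turns raw texts into messages.
func ingest(texts []string) <-chan message {
	out := make(chan message)
	go func() {
		defer close(out)
		for _, t := range texts {
			out <- message{Text: t}
		}
	}()
	return out
}

// recognizeEntities is a downstream filter; in the real system this stage
// batched messages and called AWS SageMaker entity-recognition services.
func recognizeEntities(in <-chan message) <-chan message {
	out := make(chan message)
	go func() {
		defer close(out)
		for m := range in {
			if strings.Contains(m.Text, "robbery") { // placeholder logic
				m.Entities = append(m.Entities, "crime:robbery")
			}
			out <- m
		}
	}()
	return out
}

func main() {
	for m := range recognizeEntities(ingest([]string{"report of a robbery at the station"})) {
		fmt.Println(m.Text, m.Entities)
	}
}
```

Swapping a channel for a RabbitMQ queue only changes the pipes; the filters keep the same read-process-write shape.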
The frontend incompatibility was solved by setting up regular communication with the frontend team and iterating together, which made the transition to a new data model seamless.
Led effort to migrate ~50 microservices to the official MongoDB driver
When the company started to use MongoDB as a storage backend, there was no official Go driver, and there were no plans to release one. As a result, the community created several drivers, among them GlobalSign's mgo, which was used in about 50 services on the platform.
Later, MongoDB did release an official driver, and when the company wanted to switch to MongoDB Atlas for managed hosting, all services had to be converted to that driver, as this was a requirement of the new service.
The two drivers were incompatible, and each of the ~50 services used the old one in a slightly different way, making it impossible to automate the conversion. The process was helped by setting up a wiki with extensive documentation on what to do in each situation and a growing list of examples, but in the end it was simply grunt work done by a small group of developers.
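To illustrate the kind of mechanical change every service needed, here is a side-by-side sketch of the same query in both drivers (struct, database, and collection names are hypothetical, and the official driver shown is the v1 mongo-driver):

```go
package storage

import (
	"context"

	oldmgo "github.com/globalsign/mgo"
	oldbson "github.com/globalsign/mgo/bson"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// Report is an illustrative document type.
type Report struct {
	ID   string `bson:"id"`
	Text string `bson:"text"`
}

// findReportMgo is the pre-migration shape, using the community mgo driver.
func findReportMgo(id string) (Report, error) {
	session, err := oldmgo.Dial("mongodb://localhost")
	if err != nil {
		return Report{}, err
	}
	defer session.Close()

	var r Report
	err = session.DB("sonar").C("reports").Find(oldbson.M{"id": id}).One(&r)
	return r, err
}

// findReportOfficial is the same query after migration to the official,
// context-aware driver required for MongoDB Atlas.
func findReportOfficial(ctx context.Context, id string) (Report, error) {
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost"))
	if err != nil {
		return Report{}, err
	}
	defer client.Disconnect(ctx)

	var r Report
	err = client.Database("sonar").Collection("reports").
		FindOne(ctx, bson.M{"id": id}).Decode(&r)
	return r, err
}
```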
These days, it would probably be an excellent job for AI.
Introduced an RFC-style process for making architectural decisions
The company was growing, but not all processes had matured at the same speed. Major technical decisions were still made the same way as when the company started: a Slack video call to which everyone who might be interested, however remotely, was invited. There was little to no preparation for these meetings. Someone simply stated a question or a problem at the start, and brainstorming would ensue.
As can be imagined, this resulted in very haphazard decision-making.
Unsatisfied with this, I researched how other remote, asynchronous projects handled their architectural design and was inspired by the RFC process used by the creators of the internet itself.
After getting permission from management to pursue the idea, I designed and implemented a lightweight version of it. This was met with great enthusiasm and success. For years afterwards, the introduction of the system kept showing up in employee satisfaction surveys as an example of a positive change in the company.
Senior Go/Ruby Developer at Sentia
January 2018 - April 2020, permanent, hybrid
Sentia was a European managed hosting provider with locations in the Netherlands, Belgium, Denmark, and Bulgaria, with over 300 cloud specialists taking care of hosting needs on private and public clouds.
In 2022, Sentia was acquired by Accenture.
Developed SLA-aware middleware solution for handling monitoring events
There was a variety of monitoring tools, used on both public and private clouds, that needed to be connected to notification services in an SLA-aware manner. If an incident happened in the middle of the night, but the customer had an SLA that specified service windows starting at 9:00 AM on workdays, the notification had to be delayed until the start of the next service window, so as not to needlessly page the sleeping on-call SRE.
The middleware was developed in Go and used MongoDB for storage and synchronisation between data centers. RabbitMQ was used for queueing, and events could be submitted either from the monitoring tool via a webhook or forwarded by a local CLI app.
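The heart of such SLA-awareness is deciding when an event may be forwarded at all. A simplified sketch of that decision, with an illustrative service window instead of the real SLA model:

```go
package main

import (
	"fmt"
	"time"
)

// ServiceWindow is an illustrative SLA fragment: notifications may only go
// out between Start and End (hours of the day) on workdays.
type ServiceWindow struct {
	Start, End int // e.g. 9 and 17
}

// nextNotificationTime returns when an event that occurred at t may be
// forwarded to the on-call engineer without violating the service window.
func nextNotificationTime(t time.Time, w ServiceWindow) time.Time {
	for {
		day := t.Weekday()
		if day == time.Saturday || day == time.Sunday {
			// Skip to the next day at the start of the window.
			t = time.Date(t.Year(), t.Month(), t.Day()+1, w.Start, 0, 0, 0, t.Location())
			continue
		}
		if t.Hour() < w.Start {
			return time.Date(t.Year(), t.Month(), t.Day(), w.Start, 0, 0, 0, t.Location())
		}
		if t.Hour() >= w.End {
			t = time.Date(t.Year(), t.Month(), t.Day()+1, w.Start, 0, 0, 0, t.Location())
			continue
		}
		return t // already inside the window: notify immediately
	}
}

func main() {
	window := ServiceWindow{Start: 9, End: 17}
	incident := time.Date(2020, time.March, 6, 2, 30, 0, 0, time.UTC) // a Friday, 02:30
	fmt.Println(nextNotificationTime(incident, window))               // Friday 09:00
}
```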
This was done by a team of two and was my first taste of production-level Go, with a distributed service, extensive automated tests, and full CI/CD with GitLab CI/CD.
Maintained the homegrown Ruby on Rails app that did everything
The system was responsible for ticketing, invoicing, a CMDB, time tracking, some CRM, and much more.
Earlier work experience
| Role | Duration | Company | Details |
| --- | --- | --- | --- |
| Senior PHP Developer | 2 years | Dimensional Insight | DDD, CQRS, Clean Architecture |
| Software Architect | 1 year | Medicore | MVC, Clean Architecture |
| Test Automation Lead | 1 year | Medicore | ISTQB, Tricentis Tosca |
| Test Coordinator | 1 year | Medicore | ISTQB, Risk-Based Testing, Selenium |
| Manager Development | 5 years | eFocus | Scaled team from 8 to 35, introduced SCRUM, introduced .NET |
| Senior PHP Developer | 2 years | eFocus | PHP, OOP, JavaScript |
| Web Developer | 5 years | eFocus | PHP, JavaScript, HTML, CSS |
Education
Physics, Universiteit van Amsterdam, 1993 - 1999