Hi, I'm
7+ years of experience building, scaling, and optimizing backend systems. Currently at Virtuagym in the Netherlands.
As a Backend Software Engineer with 7+ years of experience, I've built, scaled, and optimized systems that power real-world applications. I specialize in Python-based backend development using Django and Flask, and have deep expertise in cloud infrastructure with AWS.
I've worked across distributed, remote-first teams — from startups to established companies — delivering high-quality software with a focus on clean architecture, performance, and reliability. My experience spans full-stack development, DevOps, data engineering, and API design.
Beyond coding, I'm passionate about mentorship, continuous learning, and contributing to the developer community. I write technical articles and actively explore new technologies to sharpen my craft.
Utrecht, Netherlands
Backend Engineer at Virtuagym
7+ Years in Software Engineering
English & Swahili (Native)
An ETL pipeline providing temperature, population, and immigration statistics for different cities. It extracts data from multiple source datasets, transforms it with Apache Spark, loads it into Amazon Redshift, and orchestrates the workflow with Apache Airflow.
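The extract–transform–load shape of that pipeline can be sketched in plain Python. This is a simplified illustration only: the real project runs the transform step in Apache Spark and schedules the tasks with Apache Airflow, and the function names and record fields here are hypothetical stand-ins.

```python
# Framework-free sketch of the pipeline's ETL shape.
# In the real project the transform runs on Spark and Airflow
# schedules these steps; names and fields below are illustrative.

def extract(rows):
    """Pull raw city records from a source dataset (stubbed as a list),
    dropping rows with no city name."""
    return [r for r in rows if r.get("city")]

def transform(rows):
    """Normalize keys and units, the way the Spark job would
    (e.g. Fahrenheit -> Celsius)."""
    return [
        {
            "city": r["city"].title(),
            "avg_temp_c": round((r["avg_temp_f"] - 32) * 5 / 9, 1),
        }
        for r in rows
    ]

def load(rows, target):
    """Append transformed rows to the target table
    (a stand-in for a Redshift COPY)."""
    target.extend(rows)
    return len(rows)
```

In Airflow, each of these functions would become a task in a DAG, so a failed step can be retried on its own without re-running the whole pipeline.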
A collection of data engineering projects from Udacity's Data Engineer and Data Streaming Nanodegree programs, including data lakes, data warehouses, data modeling, and streaming pipelines.
A Python API that allows users to create, view, and update their list of favorite movies. Built with Flask and integrated with MongoDB and PostgreSQL via the SQLAlchemy ORM, with continuous deployment on Heroku.
An API that enables users to track TV shows they're watching — search for shows, view episode details, log watched episodes, tag favorites, and get suggestions for similar shows.
An API that provides real-time Coronavirus data by country, built with JavaScript and designed for easy integration with frontend applications.
Vice-Chancellor's Scholarship Award • Best Academic Performance Award
Covered data streaming systems, real-time data ingestion with Apache Kafka and Spark, and stream processing with the Faust Python library and Confluent Kafka.
Learned to create scalable data warehouses, work with big data technologies, build cloud-based data lakes, and automate data pipelines with Spark, Airflow, and AWS.
I'm always open to discussing new opportunities, interesting projects, or just connecting with fellow developers. Feel free to reach out!