BUILD YOUR DATA MESH ON APACHE KAFKA USING SPECMESH OS

Specification driven data mesh for the enterprise

Watch the episode: `SpecMesh - Kafka resources and the AsyncAPI spec`
Read the blog post
View the Apache Kafka Quick Start

Data Mesh meets streaming data

For organisations to successfully adopt data mesh, setting up and maintaining infrastructure needs to be easy. We believe the best way to achieve this is to apply the lessons learned from building a 'central nervous system', the pattern commonly used in modern data-streaming ecosystems. This approach formalises and automates the manual parts of building a data mesh.


SpecMesh combines Kafka best practices, blueprints, domain-driven design concepts, data modelling, GitOps, and chargeback. But rather than talk about it, we decided to build it!

Visit GitHub to get started: see the GitHub Docs, or check out the Source.

Features

A developer toolkit for building a streaming data mesh on Apache Kafka

Specification Driven

Build your Kafka applications using the industry-standard AsyncAPI spec.
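As a minimal sketch, assuming a hypothetical 'acme.lifestyle.onboarding' domain (the id, channel name, and schema path below are illustrative), a spec describes an app and the topics it owns:

```yaml
asyncapi: '2.5.0'
id: 'urn:acme.lifestyle.onboarding'   # hypothetical owning domain
info:
  title: ACME Lifestyle Onboarding
  version: '1.0.0'
channels:
  _public.user_signed_up:             # a topic this app owns and publishes to
    bindings:
      kafka:
        partitions: 3
        replicas: 1
    publish:
      message:
        payload:
          $ref: '/schema/user_signed_up.avsc'   # hypothetical Avro schema file
```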

Read more

Scale across the organisation

Specs capture 'Aggregates': units of business functionality. The spec's hierarchical naming provides clear ownership boundaries and concise governance rules, as the sketch below illustrates.
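For example, continuing the hypothetical domain from above, the spec id acts as the ownership prefix, and each channel expands to a fully qualified topic name beneath it:

```yaml
id: 'urn:acme.lifestyle.onboarding'  # owning domain becomes the topic prefix
channels:
  _public.user_signed_up: {}     # -> acme.lifestyle.onboarding._public.user_signed_up
  _protected.user_profile: {}    # -> acme.lifestyle.onboarding._protected.user_profile
  _private.signup_attempts: {}   # -> acme.lifestyle.onboarding._private.signup_attempts
```

Everything under the prefix is owned and governed by the publishing team, and the '_public', '_protected', and '_private' labels mark who may consume it.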

Read more

Simplified Kafka resource modelling

SpecMesh models topics, schemas, and permissions as a single unified configuration, rather than the disparate configurations employed by other tools.
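As a hedged sketch of what 'unified' means in practice (the channel name, configs, and schema path are illustrative): one channel entry can carry the topic settings, the schema to register, and the visibility label that drives the provisioned ACLs.

```yaml
channels:
  _private.signup_attempts:      # '_private' visibility drives the ACLs
    bindings:
      kafka:                     # topic configuration
        partitions: 6
        replicas: 3
        configs:
          retention.ms: '604800000'
    publish:
      message:
        payload:
          $ref: '/schema/signup_attempts.avsc'   # schema registered alongside the topic
```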

Read more

Chargeback support

Because all of an app's topics sit under one shared naming hierarchy, SpecMesh can aggregate storage and consumption metrics per app; these metrics can be used to build chargeback systems.
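As a purely illustrative sketch (all names and field values below are invented for this example), a per-app report might aggregate along the topic prefix like so:

```yaml
# Hypothetical chargeback report for one app, keyed by its topic prefix
acme.lifestyle.onboarding:
  storage-bytes: 1073741824          # disk used across all of the app's topics
  consumption:
    acme.finance.billing: 524288000  # bytes read by a consuming domain's principal
```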

Read more

Self-governance

Topic names can include a '_protected' label; the application owner can then attach a tag that grants access to other domain ids (principals).
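A sketch of how such a grant might be expressed in the spec (the domain ids are hypothetical, and the 'grant-access' tag name is an assumption based on the project's conventions):

```yaml
channels:
  _protected.user_profile:
    publish:
      tags:
        - name: 'grant-access:acme.finance.billing'  # allow this principal to consume
      message:
        payload:
          $ref: '/schema/user_profile.avsc'
```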

Read more

Observability

SpecMesh will support a dataflow-centric visualisation of all related specifications (apps), their relationships, and their producers and consumers (coming soon).

Read more

News, updates and insights

With decades of insight and expertise, we're reimagining streaming data so that you can focus on your business.

Valuable Feedback

'I think what you guys are doing is really good. We've used JulieOps at Smart, but found it quite clunky, with neither Infra nor Dev wanting to take ownership of it. SpecMesh's focus on data modelling is spot on.' Architect @SmartPensions - London
'We spent 2 years trying to build this, and what you guys have built is better.' Architect @Tier-1 Investment bank - London
'3 years ago we started evolving towards this; instead, I wish we could use SpecMesh now.' Manager @One of Europe's largest retailers
'SpecMesh is much better thought out than our current solution, which took our team over 2 years to develop... and we still can't solve chargeback.' Architect @Tier-1 Investment bank - London
'Why doesn't Kafka have this already? It just makes sense...' Attendee @Kafka Meetup London
'We are planning to ditch our broken solution and use this approach.' Manager @Nordic Shipping firm - Big Data London
'Self-governance and chargeback and modelling using an AsyncAPI spec just makes sense.' Attendee @BigDataLondon
'We will now use Terraform just for server infra, and make SpecMesh GitOps developer-led. It just makes sense.' Founder @EdTech startup
'We currently use JulieOps but need features that it looks like you guys will develop (and it doesn't have).' Tier-1 Investment bank - London

Frequently Asked Questions

Q. Who created SpecMesh?

It was created by Neil Avery (ex-Confluent), Sion Smith (OSO DevOps CTO), and Andy Coates (ex-Confluent).

Q. Can I become a contributor, and how?

Yes, absolutely. Start a conversation via an Issue, Enhancement, or PR, and we can go from there.

Q. Who controls the roadmap?

It's first-come, first-served; ultimately, though, it's driven by the three of us and whoever else shows interest.

Q. What versions of Apache Kafka does it work with?

It uses the Kafka Admin Client (like all Kafka admin tools: Ansible, Terraform, and the Kafka scripts). This means it works with open-source Apache Kafka, AWS MSK, Redpanda, Confluent Cloud, and Confluent Platform.

Q. Will you extend it for anything else?

It's unlikely; there is too much value in building further up the stack.

Q. What data catalogue integration will be supported?

It will focus on LinkedIn's DataHub.

Q. Who are the companies behind it?

Liquidlabs and OSO DevOps.

Q. How can I get in contact with you?

Create an Issue in GitHub and we will be notified - https://github.com/specmesh/specmesh-build/issues