Back after another week off—so we've got the best articles from the past two weeks. Several interesting new things to check out—Bigslice and Bigmachine from GRAIL, an interesting strategy for turning change data capture events into audit events on the Debezium blog, and the SLOG system, which aims to provide low latency and strict serializability for multi-region systems. Lots more good stuff, too—posts on data pipelines, a look at new features in PostgreSQL 12, and autoscaling for Apache Airflow.
GameChanger writes about how they've (mostly) automated loading data from their data pipeline into the data warehouse. Some of the friction came from defining the schema in the data warehouse, so they wrote a new tool that generates definitions from the Avro schemas in the Confluent Schema Registry.
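The general shape of generating warehouse DDL from an Avro schema can be sketched in a few lines. The type mapping and the `ddl_from_avro` helper below are illustrative assumptions, not GameChanger's actual tool, and handle only flat, non-union field types:

```python
# Illustrative sketch: generate a CREATE TABLE statement from a parsed Avro
# record schema. The mapping and helper are hypothetical, not GameChanger's
# actual tool, and only cover simple (non-nested, non-union) field types.
AVRO_TO_SQL = {
    "string": "VARCHAR",
    "int": "INTEGER",
    "long": "BIGINT",
    "float": "REAL",
    "double": "DOUBLE PRECISION",
    "boolean": "BOOLEAN",
}

def ddl_from_avro(schema: dict) -> str:
    cols = ", ".join(
        f'{field["name"]} {AVRO_TO_SQL[field["type"]]}'
        for field in schema["fields"]
    )
    return f'CREATE TABLE {schema["name"]} ({cols})'

# A made-up schema, like one fetched from the Confluent Schema Registry.
game_events = {
    "type": "record",
    "name": "game_events",
    "fields": [
        {"name": "game_id", "type": "long"},
        {"name": "team", "type": "string"},
    ],
}
print(ddl_from_avro(game_events))
```

A real version would also need to handle nullable unions (e.g. `["null", "string"]`), logical types, and nested records.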
The kafkacat CLI tool can be used for quick-to-set-up (but not production-ready) replication between Kafka clusters/topics. This post describes how to invoke it and covers some of the caveats.
The Debezium blog shares the details of implementing a fascinating technique for building an audit log using change data capture data. The general idea is to populate a secondary table, keyed on transaction id, with the details of the JWT that was used to perform the transaction. Apache Kafka Streams is then used to join the CDC streams of those tables. The post dives into how to build out this type of system in full detail (e.g. lots of sample code showing how to build 1) a JAX-RS interceptor that automatically populates the table based on the JWT and 2) the Kafka Streams application).
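The core of the technique is a join on transaction id. As a rough sketch, here it is with plain Python dicts standing in for the two CDC streams—field and table names are hypothetical, and the post's actual implementation uses Kafka Streams state stores rather than in-memory maps:

```python
# Sketch of the audit-log enrichment join, with in-memory dicts standing in
# for the two CDC streams. Names are hypothetical; the Debezium post does
# this with a Kafka Streams application.

# CDC events from the transaction-context table, keyed on transaction id,
# holding metadata that the JAX-RS interceptor extracted from the JWT.
tx_context = {
    "tx-1001": {"user": "alice", "client_id": "mobile-app"},
}

# CDC events from a business table; each carries the id of the transaction
# that produced the change.
change_events = [
    {"tx_id": "tx-1001", "op": "u", "after": {"id": 7, "name": "kale"}},
]

def enrich(event: dict) -> dict:
    """Join a change event with the metadata of its originating transaction."""
    audit = dict(event)
    audit["audit_context"] = tx_context.get(event["tx_id"])
    return audit

audit_log = [enrich(e) for e in change_events]
```

The hard parts the post addresses—and this sketch ignores—are ordering (the context record may arrive after the change event) and buffering state between the two streams.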
PostgreSQL 12 was released a little over a week ago. The announcement describes some of the features (lots of performance improvements), and a second post on pgdash.io describes a new feature, generated columns. There are some interesting use cases for these, such as normalizing text data for searches.
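As a small illustration of the text-normalization use case, sketched here with SQLite (3.31+), whose `GENERATED ALWAYS AS` syntax closely mirrors PostgreSQL 12's; the table and column names are made up:

```python
import sqlite3

# Generated-column sketch using SQLite, whose GENERATED ALWAYS AS syntax
# closely mirrors PostgreSQL 12's. Table/column names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE articles (
        title       TEXT,
        -- recomputed automatically on every insert/update
        title_norm  TEXT GENERATED ALWAYS AS (lower(trim(title))) STORED
    )
""")
con.execute("INSERT INTO articles (title) VALUES ('  PostgreSQL 12 Released  ')")
row = con.execute("SELECT title_norm FROM articles").fetchone()
```

In PostgreSQL you could then index `title_norm` so searches hit the normalized form without repeating the expression in every query.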
GRAIL has open sourced Bigslice and Bigmachine, which enable distributed computation across large datasets using simple Go programs. Unlike other big data tools, Bigslice spins up EC2 instances at runtime to distribute your computation. It exposes a high-level programming model (e.g. Map, Join, Filter) for batch processing. The introductory blog post and the GitHub project have many more details, including how to get started (it looks quite easy!).
I can't tell you how many times I've seen a syntax error because I tried to reference a table/column in the wrong part of a SQL query. This post describes when it's OK to cross-reference columns/tables defined in other components of a SQL query. There's a good cheat sheet that you might find useful as a reference.
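A classic instance of this: in standard SQL (and PostgreSQL), an alias defined in the SELECT list isn't visible to that same query's WHERE clause, but wrapping the computation in a subquery (or CTE) puts it in scope. A small sketch with SQLite and a made-up table:

```python
import sqlite3

# Scoping sketch: an alias defined in a SELECT list is not visible to that
# query's own WHERE clause in standard SQL, but a subquery (or CTE) makes
# the computed column referenceable. Table/columns are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, qty INTEGER, price REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 2, 5.0), (2, 1, 20.0)])

rows = con.execute("""
    SELECT id, total
    FROM (SELECT id, qty * price AS total FROM orders)  -- alias defined here
    WHERE total > 15                                    -- ...is in scope here
""").fetchall()
```

Writing `SELECT id, qty * price AS total FROM orders WHERE total > 15` directly is a syntax error in PostgreSQL, since WHERE is evaluated before the SELECT list.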
For those interested in distributed systems at global scale—this post dives into SLOG, a new system designed to offer low latency and strict serializability by taking advantage of locality in client access patterns. The post gives a good introduction to the high-level intuition and system design, and if you want more, the full VLDB paper is linked.
Facebook's Scribe is a high-throughput (2.5TB per second at peak) system for capturing log data. This post shares the high-level design of the system—covering topics like availability (e.g. buffering data to local disk in case of network issues), scalability, and multitenancy.
LinkedIn has open sourced the version of Apache Kafka that they run in production across thousands of brokers—it's based on upstream Apache Kafka release branches with LinkedIn's own changes applied. The post covers some of the improvements they've made, like better scalability from reusing UpdateMetadataRequest objects and a maintenance mode that makes it easier to cleanly take down a broker. It also describes their development process and how they work with the upstream Apache Kafka project.
A look at how to ensure you're getting the best performance out of PostgreSQL (things like partial indexes and increasing the shared buffer cache) as well as some advanced features you might not have known about, like text search, geospatial indexes, hstore for key/value data, and JSON/XML data types.
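Partial indexes are worth a quick illustration: they index only the rows matching a predicate, so the index stays small when queries always filter on the same condition. Sketched with SQLite (PostgreSQL uses the same `CREATE INDEX ... WHERE` syntax); the table and index names are made up:

```python
import sqlite3

# Partial-index sketch using SQLite; PostgreSQL uses the same
# CREATE INDEX ... WHERE syntax. Table and index names are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, processed INTEGER, payload TEXT)")

# Only unprocessed rows are indexed, so the index stays small even as the
# table accumulates mostly-processed history.
con.execute("CREATE INDEX idx_unprocessed ON events (id) WHERE processed = 0")

# The planner can use the partial index because the query's WHERE clause
# matches the index's predicate.
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT id FROM events WHERE processed = 0 ORDER BY id
""").fetchall()
plan_text = " ".join(str(step) for step in plan)
```

In PostgreSQL you'd verify the same thing with `EXPLAIN`, and the win grows with the ratio of processed to unprocessed rows.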
A look at using the Kubernetes HorizontalPodAutoscaler to autoscale the workers of an Apache Airflow deployment. While the post has some details that are specific to Google Cloud Composer (a managed service for Apache Airflow), if you're interested in autoscaling your Airflow workers, this looks like a good place to get started.
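For reference, a minimal HorizontalPodAutoscaler targeting a worker Deployment looks roughly like this—the Deployment name and CPU target below are placeholders, and a Cloud Composer setup differs in its details:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: airflow-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: airflow-worker   # hypothetical name of the worker Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # placeholder scale-out threshold
```

The post's main subtlety is what happens on scale-down—workers may still be running tasks, which is why graceful termination matters for Airflow.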
Convoy writes about how they improved the latency of data loads to their data warehouse using Kafka Connect to load JSON data from Postgres, via Debezium, to Snowflake. The post has lots of practical details on deploying a production pipeline of this style.
Curated by Datadog ( http://www.datadog.com )
Hadoop Rising: The Evolving Ecosystem (Boulder) - Thursday, October 17
Full-Day Apache Cassandra and Kafka Workshop (Chicago) - Thursday, October 17
Kafka & KSQL (Columbus) - Tuesday, October 15
Apache Beam Meetup 8: Streaming SQL in Beam + Beam Use Case by Huq Industries (London) - Wednesday, October 16
Apache Beam Meetup 2: Portability, Beam on Spark, and More! (Paris) - Thursday, October 17
Berlin AWS Group Meetup (Berlin) - Tuesday, October 15
Apache Kafka at Deutsche Bahn & Confluent Cloud (Frankfurt) - Wednesday, October 16
Managing Data Flows: Apache NiFi Deep Dive + Streaming Use Cases (Vienna) - Thursday, October 17
First Warsaw Airflow Meetup (Warsaw) - Thursday, October 17
MQTT and Apache Kafka: A Case Study of Uchumi Commercial Bank-Tanzania (Nairobi) - Saturday, October 19
Open Source Technologies at Expedia (Bangalore) - Wednesday, October 16
Apache Kafka and Microservices (Singapore) - Thursday, October 17
Viktor Gamov and George Hall Talk Kafka, Kubernetes, Connectors, and Operator (Docklands) - Tuesday, October 15
Kafka on Kubernetes: Does It Really Have to Be “The Hard Way”? (Sydney) - Thursday, October 17
FinTech Production with Kafka Streams (Melbourne) - Thursday, October 17
Links are provided for informational purposes and do not imply endorsement. All views expressed in this newsletter are my own and do not represent the opinions of current, former, or future employers.