Gophercamp2026

Sessions

Discover the talks, workshops, and lightning talks at Gophercamp 2026

17 sessions confirmed

This list is not final and is subject to change.

Observability-Driven Development: Why 99.9% uptime doesn't mean your product works

Abstract

Your users are leaving before you know they had problems: a slow signup flow, a failing payment endpoint, or a broken onboarding step. By the time you hear about it from support tickets, you've already lost trust and revenue.

Most Go applications start with great intentions: fast iteration, clean code, and rapid shipping. But without the right observability foundations from day one, teams end up flying blind. Metrics live in one place, logs in another, and there's no way to connect a spike in error rates to actual user impact.

In this talk, I'll share hard-won lessons from building production systems at scale and show you how to instrument Go applications with user journeys at the center. You'll learn how to build a minimal, effective observability stack using OpenTelemetry, connect technical signals to business outcomes, and establish SLOs that Product and Engineering can co-own. This is not a talk about adding more dashboards. This is about shipping fast with confidence.

What You'll Learn

1. Why observability is a Day 1 decision
   - The cost of flying blind: churn, firefighting, and lost roadmap time
   - How to measure user outcomes, not just server health
   - The difference between good and great early-stage observability

2. Building the minimal observability stack in Go
   - Instrumenting with OpenTelemetry: metrics, traces, and structured logs
   - Choosing the right backends: Prometheus, Tempo, Loki (or managed alternatives)
   - Connecting technical signals: from metric spike → trace → log → user impact
   - Practical Go patterns: middleware, context propagation, and sampling strategies

3. Making SLOs about user journeys
   - Defining SLIs/SLOs for core flows (signup, checkout, onboarding)
   - Shared ownership between Product & Engineering
   - Using error budgets to balance speed and reliability
   - Release guardrails: detecting regressions in minutes, not hours

Target Audience

- Go developers at startups or scale-ups who want to build observability from the ground up
- Engineering leads balancing velocity with reliability
- Product Engineers who need to understand user impact, not just server metrics
- Anyone who has debugged production issues by guessing

Prerequisites: Basic Go experience. No prior observability knowledge required.

Transpiling Go to Rust and Others

Abstract

Goany (https://github.com/pdelewski/goany) is a transpiler that takes a subset of Go and compiles it to C++, C#, Rust, JavaScript, and Java, allowing developers to write portable libraries once using familiar Go syntax and tooling. In this talk, attendees will learn why Go's simplicity makes it uniquely suited as a transpilation source, how to design a minimal yet practical language subset for cross-language compatibility, and what trade-offs arise when mapping Go constructs to fundamentally different languages.

Talk Description

Have you ever implemented the same logic twice because one component was written in Go and another in a different language? This is a common problem in polyglot systems, and existing solutions — C bindings via FFI, IDL-based code generation, or transpilers like Haxe — each come with significant trade-offs: adoption barriers, serialization overhead, or the need to learn an entirely new language.

This talk introduces a different approach: using Go itself as the source language for cross-language transpilation. Goany is not a theoretical exercise — it is a working project that already transpiles Go code to five target languages. During this session, I will show practical examples of transpiled code, including a C64 emulator and a GUI demo running in the browser, and walk through the design decisions behind building such a tool. But building a transpiler is only half the challenge — the other half is deciding what to transpile.

I will cover:

- The problem: why reusing logic across language boundaries remains painful, and why existing approaches (FFI, Protobuf/gRPC, Haxe, FusionLang) fall short in certain scenarios
- Why Go is a great fit: its small, well-defined spec, comprehensive standard library, and strong tooling make it an ideal transpilation source — unlike custom languages, every goany program is a valid Go program
- Designing the subset (what to include and why): the guiding principle behind goany's subset is to include only constructs that produce nearly one-to-one correspondence between input and output in every target language. I will walk through the reasoning behind what made the cut (primitives, slices, structs, multiple returns, methods, loops, conditionals) and what was deliberately left out (goroutines, generics), with examples of how each decision affects the generated code. The subset is intentionally a work in progress: it started minimal and is gradually expanding as new constructs prove they can be mapped cleanly across all targets
- How goany works: a walkthrough of the transpiler architecture, from parsing Go's AST and mapping the chosen subset to target language constructs, to producing readable output in C++, C#, Rust, JavaScript, and Java
- Live demos: a C64 emulator and a GUI application running in the browser via JavaScript transpilation, demonstrating that this focused subset is already sufficient to build non-trivial, real-world applications
- Challenges: dealing with memory model differences, producing idiomatic output, and the ongoing tension between expanding the subset and maintaining clean transpilation

The key insight of this talk is that designing a transpiler subset is not about supporting as many features as possible — it is about finding the largest common denominator across target languages, and evolving that denominator over time as you learn what works.
Attendees will walk away with a concrete framework for thinking about cross-language compatibility, practical knowledge of building tools on top of Go's AST packages, and a new perspective on how Go's deliberate simplicity enables use cases that more complex languages cannot.

Ultimate Private AI

This is a hands-on, full-day workshop where you'll go from zero to running open-source models directly inside your Go applications — no cloud APIs, no external servers, no data leaving your machine.

You'll start by loading a model and running your first inference with the Kronk SDK. Then you'll learn how to configure models for your hardware — GPU layers, KV cache placement, batch sizes, and context windows — so you get the best performance out of whatever machine you're running on. With the model tuned, you'll take control of its output through sampling parameters: temperature, top-k, top-p, repetition penalties, and grammar constraints that guarantee structured JSON responses.

Next you'll see how Kronk's caching systems — System Prompt Cache (SPC) and Incremental Message Cache (IMC) — eliminate redundant computation and make multi-turn conversations fast. You'll watch a conversation go from full prefill on every request to only processing the newest message.

With the foundation solid, you'll build real applications: a Retrieval-Augmented Generation (RAG) pipeline that grounds model responses in your own documents using embeddings and vector search, and a natural-language-to-SQL system where the model generates database queries from plain English — with grammar constraints ensuring the output is always valid, executable SQL.

Each part builds on the last. By the end of the day, you won't just understand how private AI works — you'll have built applications that load models, cache intelligently, retrieve context, and generate code, all running locally on your own hardware.

Separate Ticket Required

Understanding Escape Analysis in Go - How Variables Move Between Stack and Heap

As a seasoned Go developer responsible for developing and maintaining a registrar backend that handles connections to about 40 registries at roughly 1k req/min, I have to keep the system reliable and performant. Escape analysis turned out to be a key part of achieving that.

- Why Escape Analysis Matters

The Go compiler automatically decides whether a variable lives on the stack (fast, automatically freed) or on the heap (managed by the GC, slower). While many developers never think about memory allocation, understanding escape analysis can be crucial for performance-sensitive code paths. Excess heap allocations increase garbage collection pressure and can slow down applications.

- What Escape Analysis Is

Escape analysis is a static compiler optimization that analyzes whether a variable can safely be kept on the stack. If a variable's address escapes the function scope, for example because it is returned or stored for later use, the compiler must allocate it on the heap.

- Code Walkthrough with Examples

We'll explore key patterns that force heap escapes or keep data on the stack:

1. Simple value return vs. pointer return
2. Passing pointers and how this affects escape decisions
3. Why local pointers sometimes don't escape

Each example will include the -gcflags="-m" output to show the compiler's reasoning.

- How to Inspect Escape Behavior in Your Code

Attendees will learn how to use `go build -gcflags="-m"` to see escape analysis annotations. We'll interpret compiler messages and explain how they map to code behavior.

- Practical Tips to Reduce Unnecessary Heap Allocations

Beyond theory, the talk will cover actionable advice:

1. Prefer returning values instead of pointers when possible
2. Be mindful of interfaces and closures that may cause escapes
3. Understand allocations in hot paths and optimize where it matters