KubeCon 2023
The North American edition of the largest container-focused conference, sponsored by the Cloud Native Computing Foundation (CNCF), wrapped up in Chicago this week. The conference started off focused on the container ecosystem but has continued to broaden its appeal. CNCF typically runs it in parallel with its “CloudNativeCon,” though I’m not sure of the need for the separate branding.
This year’s hot new topic was Platform Engineering. Umm, what is Platform Engineering? The term has started to pop up in news and discussions around infrastructure software, though it has been around for at least a couple of years. At the highest level, I see it as an umbrella discipline merging DevOps, DevSecOps, and Site Reliability Engineering (SRE). In fact, I would take that a step further and claim that it could potentially encompass the broader set of tools and responsibilities that fall under ITOps and AIOps. Here’s a definition from Humanitec, via Platform Engineering:
Platform engineering is the discipline of designing and building toolchains and workflows that enable self-service capabilities for software engineering organizations in the cloud-native era. Platform engineers provide an integrated product most often referred to as an “Internal Developer Platform” covering the operational necessities of the entire lifecycle of an application. An Internal Developer Platform (IDP) encompasses a variety of technologies and tools, integrated in a manner that reduces cognitive load on developers while retaining essential context and underlying technologies. It helps operations structure their setup and enable developer self-service. Platform engineering done right means providing golden paths and paved roads that match the preferred abstraction level of the individual developer, who interacts with the IDP.
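The “golden path” idea is easier to grasp with something concrete. Here’s a minimal, purely illustrative Python sketch of the kind of self-service template an IDP might expose; the `ServiceTemplate` name, fields, and defaults are my own placeholders, not from Humanitec or any specific product:

```python
from dataclasses import dataclass

# Hypothetical golden-path template. The platform team encodes opinionated
# defaults (runtime, replica count, observability, deployment target) so a
# developer only supplies what is unique to their service.
@dataclass
class ServiceTemplate:
    name: str
    team: str
    language: str = "python"           # paved-road default
    replicas: int = 2                  # sane production baseline
    metrics_enabled: bool = True       # observability on by default
    deploy_target: str = "kubernetes"  # abstracted away from the developer

    def manifest(self) -> dict:
        """Render the platform-facing config the developer never hand-writes."""
        return {
            "service": self.name,
            "owner": self.team,
            "runtime": self.language,
            "replicas": self.replicas,
            "telemetry": {"metrics": self.metrics_enabled},
            "target": self.deploy_target,
        }

# Developer self-service: one call at the developer's preferred abstraction level.
svc = ServiceTemplate(name="checkout", team="payments")
print(svc.manifest())
```

The division of labor is the point: the platform team owns the defaults and the underlying wiring, while the developer’s cognitive load is limited to the handful of fields that are actually theirs.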
KubeCon had a lot more to chew on. If you were unable to make it, CNCF makes videos from keynotes and other talks available on its YouTube channel. Presentations from all the talks are available here.
OpenAI Dev Day

OpenAI had a blockbuster week, with breathless coverage across the tech-sphere on how this changes everything. I can’t argue with that – I do think we are seeing the evolution of a very special and powerful technology, and if you are not paying attention, you should be. Here are the key announcements that caught my eye:
- GPT-4 Turbo, a more powerful version of the company’s flagship LLM GPT-4. The updated model has a knowledge cut-off date of April 2023 (vs. September 2021 for GPT-4)
- Users can now create custom versions of ChatGPT (confusingly called “GPTs”) that can be directed to solve specific problems. For example, I could create a GPT to design logos, generate haikus or write children’s bedtime stories
- On a related note, you can publish your GPTs to the OpenAI-sponsored GPT store, potentially opening up monetization opportunities for creative applications of OpenAI’s platform while allowing curation and a level of security and quality control
- While the consumer-friendly announcements (custom GPTs and the GPT store) got most of the media attention, I think the more interesting announcements were around model customization. Unlike GPT-3.5 fine-tuning, GPT-4 fine-tuning currently requires much more work to get meaningful improvements over the base model, so OpenAI will roll out access to GPT-4 fine-tuning gradually as performance improves (see the API sketch after this list).
- Model customization: large enterprises sitting on extremely large data sets (billions of tokens) and looking to create / train their own models can now tap into some of OpenAI’s human intelligence – […we’re also launching a Custom Models program, giving selected organizations an opportunity to work with a dedicated group of OpenAI researchers to train custom GPT-4 to their specific domain. This includes modifying every step of the model training process, from doing additional domain specific pre-training, to running a custom RL post-training process tailored for the specific domain. Organizations will have exclusive access to their custom models…]
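If you work with the API, the announcements mostly land as new model ids and endpoints. Here’s a minimal sketch using OpenAI’s v1 Python SDK (released alongside Dev Day); the model id is the GPT-4 Turbo preview name from Dev Day, and the training-file id is a placeholder:

```python
# Minimal sketch against OpenAI's v1 Python SDK. Assumes OPENAI_API_KEY is set
# in the environment. "gpt-4-1106-preview" is the GPT-4 Turbo preview id
# announced at Dev Day; the training file id below is a placeholder, not a
# real upload.
from openai import OpenAI

client = OpenAI()

# Call GPT-4 Turbo, with its April 2023 knowledge cut-off.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You write children's bedtime stories."},
        {"role": "user", "content": "A short story about a sleepy container ship."},
    ],
)
print(response.choices[0].message.content)

# Fine-tuning today targets GPT-3.5; per OpenAI, GPT-4 fine-tuning needs more
# work to beat the base model and will open up gradually.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder id from a prior file upload
    model="gpt-3.5-turbo",
)
print(job.id)
```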
Databricks
Two months ago Databricks announced a $500mm Series I funding round at a $43bn valuation. The round was led by T. Rowe Price along with new investors Capital One Ventures, Ontario Teachers’ Pension Plan and NVIDIA. Today, the company announced additional closings in the Series I round with new investors Amazon Web Services (AWS), CapitalG, and Microsoft; existing investors also participated. It is unclear how much additional capital is being provided by the new investors. Databricks has raised a total of over $4bn and has been an active acquirer in the data ecosystem, buying MosaicML for $1.3bn in June 2023 and Arcion for $100mm last month.
Flip AI
Flip AI, which calls itself “the GenAI Native Observability Company,” launched out of beta earlier this week. The company has trained a Large Language Model specifically for observability, giving it the ability to interpret observability data across metrics, events, logs, and traces. The LLM sits behind an application (“Flip”) that lets engineers debug issues using the observability data they already have. Flip interfaces with existing enterprise platforms including Datadog, Splunk and New Relic; open source solutions like Prometheus, OpenSearch and Elastic; and object stores like Amazon S3, Azure Blob Storage and GCP Cloud Storage.
In addition to its launch, the company also announced a $6.5mm seed round led by Factory, Morgan Stanley Next Level Fund and GTM Capital.
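Flip’s internals and APIs aren’t public, so the following Python sketch is purely illustrative of the pattern described above: normalize metrics, events, logs, and traces from different backends into a single context that a debugging model can reason over. Every name in it is hypothetical:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical normalized record for MELT data (metrics, events, logs, traces).
# Flip AI's real schema and APIs are not public; this only illustrates the
# pattern of unifying heterogeneous sources before handing context to an LLM.
@dataclass
class Signal:
    source: str  # e.g. "datadog", "prometheus", "s3"
    kind: Literal["metric", "event", "log", "trace"]
    timestamp: str
    body: str

def build_debug_prompt(incident: str, signals: list[Signal]) -> str:
    """Fold observability data from multiple backends into one model prompt."""
    lines = [f"Incident: {incident}", "Relevant signals:"]
    for s in sorted(signals, key=lambda s: s.timestamp):
        lines.append(f"[{s.timestamp}] ({s.source}/{s.kind}) {s.body}")
    lines.append("Identify the most likely root cause and the next debugging step.")
    return "\n".join(lines)

signals = [
    Signal("prometheus", "metric", "2023-11-08T10:02Z", "p99 latency 4.8s on checkout"),
    Signal("datadog", "log", "2023-11-08T10:01Z", "connection pool exhausted: db-main"),
]
print(build_debug_prompt("Checkout latency spike", signals))
```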