EP86: CAP, BASE, SOLID, KISS: What do these acronyms mean?

This week’s system design refresher:


2023 State of DevOps Report by Google Cloud and LinearB (Sponsored)

Are lofty DevOps ideals translating into better results for companies? Has AI begun to show an impact on software team productivity?

This 2023 report by the DevOps Research and Assessment (DORA) team at Google and LinearB collates research from over 36,000 professionals worldwide.

Now you can get a free copy of the full report.

Get Your Free Copy


CAP, BASE, SOLID, KISS: What do these acronyms mean?

The diagram below explains the common acronyms in system design.

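To make one of these acronyms concrete, here is a minimal Python sketch of the "S" in SOLID, the Single Responsibility Principle. The Report, HtmlFormatter, and ReportSaver classes are made-up examples rather than a prescribed design.

```python
# A class that mixes report data, presentation, and persistence has several
# reasons to change, which is what the Single Responsibility Principle avoids.
class ReportDoingTooMuch:
    def __init__(self, title: str, body: str):
        self.title = title
        self.body = body

    def to_html(self) -> str:            # presentation concern
        return f"<h1>{self.title}</h1><p>{self.body}</p>"

    def save(self, path: str) -> None:   # persistence concern
        with open(path, "w") as f:
            f.write(self.to_html())


# After splitting: each class now has exactly one reason to change.
class Report:
    def __init__(self, title: str, body: str):
        self.title = title
        self.body = body


class HtmlFormatter:
    def format(self, report: Report) -> str:
        return f"<h1>{report.title}</h1><p>{report.body}</p>"


class ReportSaver:
    def save(self, report: Report, formatter: HtmlFormatter, path: str) -> None:
        with open(path, "w") as f:
            f.write(formatter.format(report))
```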

Over to you: Have you invented any acronyms in your career?


Latest articles

If you’re not a paid subscriber, here’s what you missed this month.

  1. Does Serverless Have Servers?

  2. A Crash Course in Docker

  3. Shipping to Production

  4. Kubernetes: When and How to Apply It

  5. A Crash Course in Kubernetes

To receive all the full articles and support ByteByteGo, consider subscribing:

Subscribe now


Single Sign-On (SSO) explained in simple terms


The concept of SSO revolves around three key players: the User, the Identity Provider (IDP), and the Application.

  1. User: The end-user or individual who seeks access to various applications.

  2. Identity Provider (IDP): An entity responsible for user authentication and verification. Common IDPs include Google, Facebook, and company-specific systems.

  3. Application: The software or service that the user wants to access. Applications rely on the IDP for user authentication instead of maintaining their own credentials.

Single Sign-On (SSO) simplifies user access by letting users log in to multiple applications with a single set of credentials, improving the user experience and reducing password fatigue. It also centralizes authentication and access management, strengthening security, streamlining access control, and saving time and costs.
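To make the hand-off between the three players concrete, here is a small, self-contained Python sketch of an SSO-style login. The MockIdP and Application classes are illustrative stand-ins for a real identity provider and real applications, not an implementation of any specific protocol such as SAML or OIDC.

```python
import secrets

class MockIdP:
    """Stands in for Google, Facebook, or a company IdP: authenticates the
    user once and issues short-lived, one-time codes that apps can exchange."""
    def __init__(self):
        self._sessions = {}   # user_id -> authenticated
        self._codes = {}      # one-time code -> user_id

    def login(self, user_id: str, password: str) -> None:
        # A real IdP would verify credentials; here we just record the session.
        self._sessions[user_id] = True

    def issue_code(self, user_id: str) -> str:
        if not self._sessions.get(user_id):
            raise PermissionError("user not authenticated at the IdP")
        code = secrets.token_urlsafe(16)
        self._codes[code] = user_id
        return code

    def exchange_code(self, code: str) -> dict:
        user_id = self._codes.pop(code)   # one-time use
        return {"sub": user_id}


class Application:
    """Any app that trusts the IdP rather than storing its own passwords."""
    def __init__(self, name: str, idp: MockIdP):
        self.name, self.idp = name, idp

    def sign_in_with_sso(self, code: str) -> str:
        claims = self.idp.exchange_code(code)
        return f"{claims['sub']} is now logged in to {self.name}"


# One authentication at the IdP, then access to multiple applications.
idp = MockIdP()
idp.login("alice", "correct-horse-battery-staple")
for app in (Application("email", idp), Application("wiki", idp)):
    print(app.sign_in_with_sso(idp.issue_code("alice")))
```

Running it shows a single authentication at the IdP unlocking several applications with the same identity, which is the core promise of SSO.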

Over to you: What's your perspective on the future of secure authentication in the digital realm?


Imperative vs Functional vs Object-oriented Programming


In software development, different programming paradigms offer unique ways to structure code. Three main paradigms are Imperative, Functional, and Object-oriented programming, each with distinct approaches to problem-solving.

  1. Imperative Programming:
    - Works by changing program state through a sequence of commands.
    - Uses control structures like loops and conditional statements for execution flow.
    - Emphasizes mutable data and explicit steps for task completion.
    - Examples: C, Python, and most procedural languages.

  2. Functional Programming:
    - Relies on pure functions, emphasizing computation without side effects.
    - Promotes immutability and the avoidance of mutable state.
    - Supports higher-order functions, recursion, and declarative programming.
    - Examples: Haskell, Lisp, Scala, and functional features in languages like JavaScript.

  3. Object-oriented Programming:
    - Focuses on modeling real-world entities as objects that contain data and methods.
    - Encourages concepts such as inheritance, encapsulation, and polymorphism.
    - Utilizes classes, objects, and interfaces to structure code.
    - Examples: Java, C++, Python, and Ruby.
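
To see the contrast side by side, here is one small task, squaring the even numbers in a list, written three ways in Python. The function and class names are illustrative only.

```python
nums = [1, 2, 3, 4, 5, 6]

# 1. Imperative: explicit steps and mutable state drive the computation.
def squares_of_evens_imperative(values):
    result = []
    for v in values:              # control flow via a loop
        if v % 2 == 0:            # conditional guard
            result.append(v * v)  # mutate the accumulator
    return result

# 2. Functional: pure expressions, no mutation, higher-order functions.
def squares_of_evens_functional(values):
    return list(map(lambda v: v * v, filter(lambda v: v % 2 == 0, values)))

# 3. Object-oriented: data and behavior bundled into an object.
class NumberList:
    def __init__(self, values):
        self._values = list(values)   # encapsulated data

    def squares_of_evens(self):       # behavior attached to the data
        return [v * v for v in self._values if v % 2 == 0]

print(squares_of_evens_imperative(nums))    # [4, 16, 36]
print(squares_of_evens_functional(nums))    # [4, 16, 36]
print(NumberList(nums).squares_of_evens())  # [4, 16, 36]
```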

Over to you: Which one resonates with your coding style? Ever had an 'aha' moment while using a particular paradigm? Share your perspective.


Data Pipelines Overview


Data pipelines are a fundamental component of managing and processing data efficiently within modern systems. These pipelines typically encompass 5 predominant phases: Collect, Ingest, Store, Compute, and Consume.

  1. Collect:
    Data is acquired from data stores, data streams, and applications, sourced remotely from devices or business systems.

  2. Ingest:
    During the ingestion process, data is loaded into systems and organized within event queues.

  3. Store:
    After ingestion, the organized data is stored in data warehouses, data lakes, or data lakehouses, along with other systems such as databases.

  4. Compute:
    Data undergoes aggregation, cleansing, and manipulation to conform to company standards, including tasks such as format conversion, data compression, and partitioning. This phase employs both batch and stream processing techniques.

  5. Consume:
    Processed data is made available for consumption through analytics and visualization tools, operational data stores, decision engines, user-facing applications, dashboards, data science, machine learning services, business intelligence, and self-service analytics.
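
As a rough illustration of how the five phases fit together, here is a toy, in-memory Python walkthrough. The list, queue, and dictionary "warehouse" are stand-ins for real systems such as message brokers, object storage, and data warehouses, and the sensor events are made up.

```python
from collections import deque

# 1. Collect: raw events arrive from devices, apps, or business systems.
raw_events = [
    {"device": "sensor-1", "temp_c": "21.7"},
    {"device": "sensor-2", "temp_c": "19.3"},
]

# 2. Ingest: load the events into an event queue for downstream processing.
event_queue = deque(raw_events)

# 3. Store: persist the ingested records (a dict standing in for a data
#    lake or warehouse table).
warehouse = {"raw_temperatures": list(event_queue)}

# 4. Compute: cleanse and transform to a standard schema (convert strings
#    to floats, derive Fahrenheit), batch-style.
warehouse["clean_temperatures"] = [
    {"device": r["device"],
     "temp_c": float(r["temp_c"]),
     "temp_f": float(r["temp_c"]) * 9 / 5 + 32}
    for r in warehouse["raw_temperatures"]
]

# 5. Consume: serve the processed data to dashboards, BI, or ML tools.
clean = warehouse["clean_temperatures"]
avg_c = sum(r["temp_c"] for r in clean) / len(clean)
print(f"Average temperature: {avg_c:.1f} C")
```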

The efficiency and effectiveness of each phase contribute to the overall success of data-driven operations within an organization.

Over to you: What's your story with data-driven pipelines? How have they influenced your data management game?