

Understanding the Liskov Substitution Principle in Python: A Comprehensive Guide

In the realm of object-oriented programming, the Liskov Substitution Principle (LSP) stands as a fundamental pillar, playing a crucial role in ensuring robust and maintainable software designs. Named after computer scientist Barbara Liskov, this principle is integral to the SOLID principles, a set of guidelines aimed at promoting scalable, understandable, and flexible coding practices. This article delves into the essence of LSP, offering insights into its practical application in Python programming.

What is the Liskov Substitution Principle?

LSP asserts that objects of a superclass should be replaceable with objects of its subclasses without affecting the correctness of the program. In simpler terms, if class B is a subclass of class A, then wherever class A is used, class B should be able to serve as its substitute without introducing errors or unexpected behavior.

The principle is not just about method signatures and return types but also about ensuring that the subclass maintains the behavior expected by the superclass. It's about adhering to the contract established by the superclass.

Why is Liskov Substitution Principle Important?

The significance of LSP lies in its ability to ensure that a software system remains easy to understand and modify. By adhering to LSP, developers can:

  1. Enhance Code Reusability: Ensuring subclasses remain true to their parent classes' behavior makes them more versatile and reusable.
  2. Improve Code Maintainability: LSP-compliant code is generally cleaner and more organized, making it easier to maintain and extend.
  3. Reduce Bugs and Errors: By maintaining consistent behavior across a class hierarchy, LSP helps in avoiding bugs that can arise from unexpected behavior of subclasses.

LSP in Python: A Practical Guide

Example of Python Code Violating LSP

Here's a Python example that illustrates a violation of the Liskov Substitution Principle (LSP):

class Bird:
    def fly(self):
        return "I can fly!"

class Penguin(Bird):
    def fly(self):
        raise NotImplementedError("Cannot fly!")

def let_bird_fly(bird: Bird):
    print(bird.fly())

# Usage
blue_bird = Bird()
let_bird_fly(blue_bird)  # Works fine

happy_feet = Penguin()
let_bird_fly(happy_feet)  # Raises NotImplementedError

In this example, the Bird class has a method fly(). The Penguin class, a subclass of Bird, overrides the fly() method but changes its behavior drastically by raising a NotImplementedError, indicating that penguins cannot fly. This violates LSP because Penguin objects cannot be used as substitutes for Bird objects without altering the correctness of the program, specifically in the let_bird_fly function. This function expects any Bird object (or its subclass) to fly, but Penguin breaks this expectation by changing the behavior of the fly() method.

Refactoring the Code to Adhere to LSP

To refactor the provided Python code to adhere to the Liskov Substitution Principle (LSP), we need to restructure the class hierarchy so that subclasses of Bird only extend it if they can fulfill its contract (i.e., if they can implement all its methods without changing their expected behavior). Here's how the refactored code would look:

class Bird:
    def move(self):
        return "I can move!"

class FlyingBird(Bird):
    def fly(self):
        return "I can fly!"

class Penguin(Bird):
    # Penguin only inherits move() from Bird; it has no fly() method
    pass

def let_bird_fly(bird: FlyingBird):
    print(bird.fly())

# Usage
eagle = FlyingBird()
let_bird_fly(eagle)  # Works fine

happy_feet = Penguin()
# let_bird_fly(happy_feet)  # Rejected by static type checkers (e.g. mypy); at runtime it raises AttributeError because Penguin has no fly()

In this refactored version, Bird is a general class without the fly method. We create a new subclass FlyingBird that includes birds capable of flying and thus implements the fly method. This structure ensures that only flying birds are passed to functions or scenarios where flying is expected, adhering to LSP by ensuring that subclasses can be used interchangeably with their base class without altering the program's correctness.

LSP in Action: Practical Python Example

To demonstrate the Liskov Substitution Principle (LSP) in action, consider a payment processing system in Python. This example will show how subclasses can be used interchangeably with their base class without changing the program's behavior, adhering to LSP.

class Payment:
    def process_payment(self, amount):
        raise NotImplementedError

class CreditCardPayment(Payment):
    def process_payment(self, amount):
        return f"Processing credit card payment for {amount}"

class DebitCardPayment(Payment):
    def process_payment(self, amount):
        return f"Processing debit card payment for {amount}"

class PayPalPayment(Payment):
    def process_payment(self, amount):
        return f"Processing PayPal payment for {amount}"

def process_transaction(payment_method: Payment, amount):
    print(payment_method.process_payment(amount))

# Usage
credit_payment = CreditCardPayment()
process_transaction(credit_payment, 100)

debit_payment = DebitCardPayment()
process_transaction(debit_payment, 100)

paypal_payment = PayPalPayment()
process_transaction(paypal_payment, 100)

In this example, each subclass (CreditCardPayment, DebitCardPayment, PayPalPayment) of the Payment class implements the process_payment method. The process_transaction function can operate with any of these payment methods, demonstrating that subclasses can be substituted for the superclass (Payment) without affecting the function's behavior, thus adhering to LSP.
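
If you want Python itself to enforce this contract rather than relying on convention, the abc module can mark process_payment as abstract. The following is a minimal sketch (not part of the original example) showing the idea:

from abc import ABC, abstractmethod

class Payment(ABC):
    @abstractmethod
    def process_payment(self, amount):
        """Every concrete payment method must implement this."""
        ...

class CreditCardPayment(Payment):
    def process_payment(self, amount):
        return f"Processing credit card payment for {amount}"

# Payment() itself can no longer be instantiated, and any subclass that
# forgets to implement process_payment raises a TypeError on instantiation.
credit = CreditCardPayment()
print(credit.process_payment(100))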

Conclusion of LSP in Python

The Liskov Substitution Principle is more than a guideline; it's a foundation for creating effective and reliable object-oriented software. By understanding and applying LSP in Python programming, developers can build systems that are not only efficient but also scalable and easy to maintain. It’s a principle that underscores the importance of thoughtful class design, ensuring that subclasses truly represent an is-a relationship with their superclasses.

Remember, the power of LSP lies in its simplicity and its profound impact on the integrity and robustness of software design.

Tutorial: Deploying RabbitMQ with Docker Compose

Step 1: Install Docker and Docker Compose

Make sure Docker and Docker Compose are installed on your machine. You can download and install them from the official Docker site: Docker and Docker Compose.

Step 2: Create a Docker Compose File

Create a file named docker-compose.yml and paste the provided Docker Compose configuration.

version: '3'

services:

  rabbitmq:
    image: rabbitmq:3.12-management-alpine
    container_name: 'rabbitmq_api_client'
    volumes:
      - ./rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=password
      - RABBITMQ_CONFIG_FILE=/etc/rabbitmq/rabbitmq.conf
    ports:
      - 5672:5672
      - 15672:15672

Understanding the Docker Compose Configuration for RabbitMQ

The provided Docker Compose configuration creates a Docker container named rabbitmq_api_client using the Docker image rabbitmq:3.12-management-alpine. The container is configured to use port 5672 for AMQP connections and port 15672 for the RabbitMQ management interface. The container is also configured to use a custom RabbitMQ configuration file.

Some details about the configuration:

  • Volume: Mounts a volume, linking the local ./rabbitmq/rabbitmq.conf file to /etc/rabbitmq/rabbitmq.conf in the container. This makes the custom RabbitMQ configuration file available inside the container.
  • RABBITMQ_DEFAULT_USER: Sets the default RabbitMQ username, used for both AMQP connections and the management interface.
  • RABBITMQ_DEFAULT_PASS: Sets the default RabbitMQ password for that user.
  • RABBITMQ_CONFIG_FILE: Sets the path of the custom RabbitMQ configuration file within the container.

Step 3: Create a RabbitMQ Configuration File

Create a directory named rabbitmq in the same location as your docker-compose.yml file. Inside the rabbitmq directory, create a file named rabbitmq.conf. This file can be used to customize RabbitMQ settings.

Step 4: Configure RabbitMQ with a Custom Configuration File (Optional)

Open the rabbitmq.conf file on your machine and add custom configurations if needed. For example:

# rabbitmq.conf
vm_memory_high_watermark.absolute = 2GB

This sets RabbitMQ's high memory watermark limit to 2 GB. If you wish to add more configuration parameters, you can find the complete list of configuration parameters in the official RabbitMQ documentation: Configuration.

Step 5: Deploy RabbitMQ with Docker Compose

Open a terminal, navigate to the directory containing your docker-compose.yml file, and execute the following command:

docker-compose up

This command will download the RabbitMQ Docker image, create a Docker container named rabbitmq_api_client based on the provided configuration, and start the RabbitMQ server in the foreground (add the -d flag to run it in the background).

Upon launch, you should see in the logs whether your configuration file has been recognized:

rabbitmq      |   Config file(s): /etc/rabbitmq/rabbitmq.conf
rabbitmq      |                   /etc/rabbitmq/conf.d/10-defaults.conf
rabbitmq      | 
rabbitmq      |   Starting broker...2023-11-29 07:36:10.216117+00:00 [info] <0.230.0> 
rabbitmq      |  node           : rabbit@b0b0b0b0b0b0
rabbitmq      |  home dir       : /var/lib/rabbitmq
rabbitmq      |  config file(s) : /etc/rabbitmq/rabbitmq.conf

Step 6: Access the RabbitMQ Management Interface

Open your web browser and go to http://localhost:15672/. Log in with the username user and password password as specified in the Docker Compose file.

Step 7: Connect to RabbitMQ via AMQP

You can now connect to RabbitMQ using the AMQP protocol on localhost:5672 with the credentials provided in the Docker Compose file.
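
As a minimal sketch of such a connection in Python (assuming the pika library is installed with pip install pika, and the credentials from the docker-compose.yml above):

import pika

# Credentials and port come from the docker-compose.yml above;
# '/' is the default vhost since none was configured
credentials = pika.PlainCredentials('user', 'password')
parameters = pika.ConnectionParameters('localhost', 5672, '/', credentials)

connection = pika.BlockingConnection(parameters)
channel = connection.channel()
print("Connected to RabbitMQ")
connection.close()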

Congratulations! You have successfully deployed RabbitMQ using Docker Compose. You can further customize the configuration by modifying the docker-compose.yml and rabbitmq.conf files according to your specific needs.

How to Perform an HTTP Request in MicroPython with a Pico W?

In this article, we will explore how to perform an HTTP request in MicroPython with a Pico W.

We will utilize the urequests module, which allows making HTTP requests in MicroPython.

Prerequisites before Getting Started with Pico W

To use the Pico W, you need to have the MicroPython firmware installed on your Pico W.

You can follow the tutorial below to install the MicroPython firmware on your Pico W: Install MicroPython on a Pico W.

To transfer files to the Pico W, you can use the software Thonny or PyCharm with the MicroPython plugin installed.

Connect Your Raspberry Pi Pico W to the Internet via Wi-Fi

To perform an HTTP request, you must connect your Pico W to the internet via Wi-Fi. We will use the network module from the MicroPython standard library to connect the Pico W to the internet.

import network
from time import sleep

ssid = "YOUR_SSID"          # Replace with your Wi-Fi network name
password = "YOUR_PASSWORD"  # Replace with your Wi-Fi password

sta_if = network.WLAN(network.STA_IF)
sta_if.active(True)

sta_if.connect(ssid, password)
while not sta_if.isconnected():
    sleep(1)
print('Connection successful')
print(sta_if.ifconfig())

In this code, we start by importing the network module from the MicroPython standard library. Next, we set the Wi-Fi network name and password in the ssid and password variables. Finally, we create an instance of the WLAN class and connect to the Wi-Fi network.

The while loop is used to wait for the Wi-Fi network connection to be established. Once the connection is established, we display the IP address of the Pico W.

This loop is crucial because without it, the following code will be executed before the Wi-Fi network connection is established.

To learn more about the network module, you can refer to the official documentation: network — network configuration.

Installing the urequests Module on Pico W

# Check if the urequests module is already present
try:
    import urequests
except ImportError:
    # If the module is not present, install micropython-urequests via upip
    import upip
    upip.install('micropython-urequests')

    # Import urequests again after installation
    import urequests

In this code, we use a try-except structure to check if the urequests module is already present. If importing generates an error (ImportError), it means the module is not installed. In this case, we use the upip module to install micropython-urequests. Then, we import the urequests module again to use it in the rest of the code.

Making an HTTP Request in MicroPython with a Pico W

Here is an example code demonstrating how to make an HTTP request in MicroPython with a Pico W to the Rutilus Data platform. The goal is to send a value to a time series using an HTTP POST request.

The post function from the urequests module is used to perform the HTTP request. It requires the request URL, the data to be transmitted, and the request headers.

The data to be sent must be in JSON format, for which the ujson module from the MicroPython standard library is used. It transforms a Python dictionary into JSON format. The request headers specify that the data is in JSON format.

Don't forget to replace "token_authentication" with the token of your device, which you can find on the corresponding page of your device on the Rutilus Data platform.

import urequests
import ujson

# Replace "token_authentification" with the token of your device
data = {"value": 1.0, "token": "token_authentification"}
url = "https://rutilusdata.fr/api/timeseries/"
headers = {'content-type': 'application/json'}

# Make the HTTP POST request
response = urequests.post(url, data=ujson.dumps(data), headers=headers)

# Get the response in JSON format
json_response = response.json()

# Show the response
print(json_response)

To interpret the response of the request, we use the json() method of the response object returned by urequests. This method converts the body of the response into a MicroPython dictionary.
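
On a memory-constrained board like the Pico W, it is also good practice to check the status code and close the response once you are done with it. A minimal sketch, reusing the url, data, and headers variables from the example above:

response = urequests.post(url, data=ujson.dumps(data), headers=headers)

# 2xx status codes indicate the platform accepted the value
if 200 <= response.status_code < 300:
    print(response.json())
else:
    print("Request failed with status:", response.status_code)

# Free the underlying socket and buffers
response.close()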

The complete source code is available on GitHub.

Comparative Analysis of Message Queuing Solutions for IoT

Introduction

Message queuing solutions are tools that facilitate communication between applications. They are used in various domains, particularly in IoT, to enable communication between sensors and data processing applications.

These solutions enable asynchronous communication between applications, meaning the sending application does not wait for a response from the receiving application. This asynchronous communication helps decouple applications, preventing the sending application from being blocked.

Messages are sent to queues and processed by receiving applications in the order they were sent.

Message queuing solutions are powerful tools but can be challenging to grasp.

This document aims to present and compare the most commonly used message queuing solutions in IoT.

Basic Concepts of Message Queuing Solutions

The basic concept of message queuing solutions is as follows: the producer sends a message to the broker, which stores it in a queue. Subsequently, the consumer retrieves the message from the queue and processes it.

sequenceDiagram
    participant Producer
    participant Broker
    participant Consumer
    Producer->>Broker: Message
    Broker->>Consumer: Message

Messages

Messages are data sent by applications to the message broker. They can be binary or structured in formats like JSON, XML, etc.

Queues

Queues are waiting lines where messages are sent. Typically, queues operate on a First In First Out (FIFO) basis, meaning messages are processed in the order they were sent.

The Producer

The producer is the application responsible for sending messages to the broker. Multiple producers can exist for a single queue.

The Consumer

The consumer is the application that processes messages in the queue.

Message Queuing Solutions

RabbitMQ

RabbitMQ is an open-source message broker written in Erlang. It ensures messages reach their destination and stores them in queues.

Mosquitto

Mosquitto is an open-source message broker written in C. Widely used in IoT, it is lightweight and consumes minimal resources.

Redis

Redis is an open-source database management system written in C. Data is stored directly in RAM, providing quick responses. Redis is not only a database but also a message broker.

Redis offers message queuing capabilities, for example through its Pub/Sub mechanism, lists, and Streams, with client libraries available for Python and many other languages.
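
As an illustration, a very simple queue can be built with the redis-py client and a Redis list; this is only a sketch (the queue name my_queue and the message content are placeholders), not the only way to do message queuing with Redis:

import redis

r = redis.Redis(host='localhost', port=6379)

# Producer side: push a message onto the 'my_queue' list
r.lpush('my_queue', 'temperature=21.5')

# Consumer side: block until a message is available (up to 5 seconds)
item = r.brpop('my_queue', timeout=5)
if item:
    queue_name, message = item
    print(message.decode())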

Comparative Analysis of Time Series Databases for IoT

Time series databases are optimized for the storage and retrieval of temporal data. Their distinguishing feature is that every data point is stored with a timestamp, allowing queries based on time.

Use cases are diverse, including sensors, monitoring, weather, IoT, networks, and web applications.

Characteristics of Time Series Databases

Time series databases share common features that set them apart from other database types:

  • Timestamping: All data entries are timestamped and indexed with a timestamp.
  • Compression: Time series databases are designed to compress data. To learn more about the compression algorithms used, refer to this article.
  • Partitioning: Data entries are partitioned based on time or another database field.
  • Retention: Data is automatically deleted after a specified time period, limiting database size and reducing storage costs.

InfluxDB - The All-in-One Time Series Database

InfluxDB, written in Go, developed by InfluxData, offers an open-source version (clustering functionality not included) and a Cloud offering.

The integrated graphical interface of InfluxDB facilitates user management, data visualization, alert creation, and plugin configuration, enhancing the user experience for exploring and understanding temporal data.

InfluxDB provides data collection plugins for monitoring (docker, amazon, hardware, AMQP, GitHub, etc.), simplifying data recording in the database. Clients in various languages are available, allowing querying of InfluxDB's REST API.

A distinctive feature is the use of the "FLUX" query language, specifically designed for manipulating temporal data, providing dedicated syntax for time series queries.

Data Structure in InfluxDB

Data in InfluxDB is structured as follows:

  • Bucket: A bucket is a data container, comparable to a database in a relational database.
  • Measurement: A measurement (temperature, consumption, etc.) is a dimension of a time series, comparable to a table in a relational database.
  • Tag: A tag is an indexed key-value label used to filter data, comparable to an indexed column.
  • Field: A field holds the recorded value (typically numeric), comparable to a non-indexed column.
  • Timestamp: A timestamp is the temporal value associated with each point, comparable to a time column.

For more information on data structuring in InfluxDB, refer to the InfluxDB documentation.
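
To make these concepts concrete, here is a short sketch using the official influxdb-client Python library; the URL, token, org, and bucket values are placeholders, not part of the original article:

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection settings
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One point: measurement "temperature", tag "room", field "value", implicit timestamp
point = Point("temperature").tag("room", "living_room").field("value", 21.5)
write_api.write(bucket="my-bucket", record=point)

client.close()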

MongoDB - The Versatile NoSQL Database

MongoDB, a NoSQL database written in C++, developed by MongoDB Inc., offers an open-source version and a Cloud offering.

MongoDB is a document-oriented and versatile database. By default, it is not configured to store temporal data; some setup, such as creating a dedicated time series collection, is required to store and query temporal data efficiently.

Data Structure in MongoDB

Data in MongoDB is structured as follows:

  • Database: A database is a data container, comparable to a database in a relational database.
  • Collection: A collection is a set of documents, comparable to a table in a relational database.
  • Document: A document is a JSON-formatted record, comparable to a row in a relational database. In a document, temporal data, tags, fields, and lists of values can be stored.

For more information on data structuring in MongoDB, refer to the MongoDB documentation.
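
For recent MongoDB versions (5.0 and later), a time series collection can be created explicitly. Here is a sketch with pymongo, where the database name, collection name, and field names are placeholders:

from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["iot_db"]

# Create a time series collection (requires MongoDB 5.0+); "timestamp" is the time field
db.create_collection(
    "temperature",
    timeseries={"timeField": "timestamp", "metaField": "sensor", "granularity": "minutes"},
)

# Insert one timestamped document
db["temperature"].insert_one({
    "timestamp": datetime.now(timezone.utc),
    "sensor": {"room": "living_room"},
    "value": 21.5,
})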

TimeScaleDB - PostgreSQL-Based Database for Time Series

TimeScaleDB is an open-source time series database built on PostgreSQL, developed by Timescale Inc., which also offers a Cloud offering.

TimeScaleDB is an extension of PostgreSQL, enabling the storage and querying of temporal data using SQL. It is compatible with PostgreSQL clients.

Data Structure in TimeScaleDB

Data in TimeScaleDB is structured similarly to PostgreSQL, with tables referred to as hypertables.

The hypertable is a table partitioned based on time, allowing the creation of dimensions with tags. Storing the same data with the same timestamp is not allowed, preventing duplicates.
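
As a sketch of how a hypertable is created from Python using psycopg2 (connection settings, table name, and columns are illustrative assumptions, not taken from the article):

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres", password="password")
cur = conn.cursor()

# A regular PostgreSQL table for temperature readings
cur.execute("""
    CREATE TABLE IF NOT EXISTS conditions (
        time        TIMESTAMPTZ NOT NULL,
        room        TEXT        NOT NULL,
        temperature DOUBLE PRECISION
    );
""")

# Turn it into a hypertable partitioned on the "time" column (TimescaleDB function)
cur.execute("SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);")

conn.commit()
cur.close()
conn.close()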

Choosing the Right Time Series Database

Each database management system has unique advantages and disadvantages. Here are some guidelines to help you choose the most suitable database for your use case.

  • TimeScaleDB: Leverages PostgreSQL's maturity, enabling the use of PostgreSQL clients to connect to the database and benefit from SQL. Frameworks compatible with PostgreSQL also work with TimeScaleDB; for example, Django's ORM can be used on top of a TimeScaleDB database.

  • InfluxDB: Specialized in time series, easy to learn and use. Data is structured to record temporal information with mandatory "measurement" and "timestamp" fields. However, storage is exclusively dedicated to time series, and for other needs (project management, users, etc.), another database may be required. InfluxDB offers over 300 plugins to facilitate data collection, but using the "FLUX" query language is required to query the database.

  • MongoDB: A versatile NoSQL database that is easy to use. Specific database configuration is required for storing temporal data. MongoDB is suitable for applications not requiring massive data storage (less than 1000 points per second). If adding metadata to time series is necessary, MongoDB may be a suitable solution. However, querying JSON-formatted data may be more complex compared to a relational database.

How to Save RabbitMQ Messages in PostgreSQL with Python?

RabbitMQ, a powerful means of communication between applications and connected objects, facilitates the efficient transfer of messages, but it does not provide a built-in mechanism for long-term storage. To overcome this limitation, integration with a robust database such as PostgreSQL is essential.

For instance, when leveraging AMQP or MQTT protocols for data transmission from connected objects, such as monitoring room temperature or collecting energy consumption from a device, there is often a pressing need to retain this information for future analysis. RabbitMQ does not inherently provide this data persistence feature, but by intelligently synchronizing your message flow with PostgreSQL, you can create a comprehensive solution that enables not only real-time communication but also durable data storage.

In this tutorial, we will show you how to save RabbitMQ messages in PostgreSQL using Python.

Prerequisites for Saving RabbitMQ Messages in PostgreSQL

To follow this tutorial, you need the following:

  • a RabbitMQ server
  • a PostgreSQL server

Launching RabbitMQ Server and PostgreSQL Server

In our example, we will use Docker to launch RabbitMQ and PostgreSQL. Below is the docker-compose.yml file that defines the RabbitMQ and PostgreSQL services.

version: "3"

services:
  rabbitmq:
    image: rabbitmq:3.12-management-alpine
    container_name: rabbit_mq_to_db
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=password
      - RABBITMQ_DEFAULT_VHOST=default_vhost
    ports:
      - 5672:5672
      - 15672:15672

  postgres:
    image: postgres:16-alpine
    container_name: postgres_db
    hostname: postgres
    restart: always
    environment:
      - POSTGRES_DB=postgres_db
      - POSTGRES_USER=postgres_user
      - POSTGRES_PASSWORD=password
    volumes:
      - ./docker-entrypoint-initdb.d/:/docker-entrypoint-initdb.d
    ports:
      - "5432:5432"

Description of the docker-compose.yml file:

This Docker Compose file defines two services, RabbitMQ and PostgreSQL, along with their respective configurations to create and orchestrate Docker containers that interact with each other. Here is a detailed explanation of the content:

RabbitMQ Service:

  • rabbitmq: Service name.
  • image: Uses RabbitMQ version 3.12 image with the management plugin enabled, allowing access to the RabbitMQ admin interface at http://localhost:15672. Connect using credentials user and password.
  • container_name: Name of the container created for this service.
  • environment: Defines environment variables for RabbitMQ, including username, password, and virtual host (vhost).
  • ports: Maps ports 5672 and 15672 of the RabbitMQ container to the same ports on the host.

PostgreSQL Service:

  • postgres: Service name.
  • image: Uses PostgreSQL version 16 image with Alpine Linux.
  • container_name: Name of the container created for this service.
  • hostname: Hostname of the PostgreSQL container.
  • restart: Container's automatic restart policy (set to "always").
  • environment: Defines environment variables for PostgreSQL, including the database name, username, and password.
  • volumes: Mounts the local directory ./docker-entrypoint-initdb.d/ into the PostgreSQL container's database initialization directory, allowing execution of SQL initialization scripts.
  • ports: Maps port 5432 of the PostgreSQL container to the same port on the host.

To launch RabbitMQ and PostgreSQL, execute the following command:

docker-compose up

Create Python Scripts to Save RabbitMQ Messages in PostgreSQL

Now that RabbitMQ and PostgreSQL are launched, we will create the Python scripts. We will need two scripts:

  • A RabbitMQ publisher to send messages with Python
  • A RabbitMQ consumer to receive messages with Python and save them in PostgreSQL

We will also need the pika library, which is a Python implementation of the AMQP 0-9-1 protocol to connect to RabbitMQ.

Finally, we will need the psycopg2 library to connect to PostgreSQL and interact with the database.
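
To install it, you can run the following command (psycopg2-binary provides pre-built binaries; the plain psycopg2 package also works if you have the PostgreSQL build tools installed):

pip install psycopg2-binary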

RabbitMQ Publisher to Send Messages with Python

The publisher is a Python script that connects to RabbitMQ and sends messages to a queue.

To connect to RabbitMQ, we will use the pika library. To install it, execute the following command:

pip install pika

The following code is the publisher:

import random
from time import sleep

import pika

# Connection to RabbitMQ
url = 'amqp://user:password@localhost:5672/default_vhost'
params = pika.URLParameters(url)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Creation of the 'temperature' queue
channel.queue_declare('temperature')

# Bind the 'temperature' queue to the 'amq.direct' exchange using the routing key 'temperature_routing_key'
channel.queue_bind('temperature', 'amq.direct', 'temperature_routing_key')
while True:
    sleep(3)
    # Send a message to the 'temperature' queue
    channel.basic_publish('amq.direct', 'temperature_routing_key', body=str(random.uniform(0, 100)))

The RabbitMQ connection URL is amqp://user:password@localhost:5672/default_vhost where:

  • user is the username
  • password is the password
  • localhost is the RabbitMQ server's IP address
  • 5672 is the RabbitMQ port
  • default_vhost is the RabbitMQ vhost (virtual host)

RabbitMQ Consumer to Receive Messages with Python and Save them in PostgreSQL

The consumer is a Python script that connects to RabbitMQ, reads messages from a queue, and saves them in PostgreSQL.

Here is a simple example of a consumer that reads messages from the temperature queue and saves them in the temperature table in PostgreSQL:

import pika
import psycopg2

# Connection to the PostgreSQL database
connection_sql = psycopg2.connect(database="postgres_db", user="postgres_user", password="password", host="localhost", port="5432")
cursor = connection_sql.cursor()

# Definition of the callback function that will be called when a message is received in the 'temperature' queue
def callback(ch, method, properties, body):
    # Conversion of the message to string
    body = body.decode()
    # Insertion of the message in the 'temperature' table
    cursor.execute("INSERT INTO temperature (value) VALUES (%s)", (body,))
    connection_sql.commit()

# Connection to RabbitMQ
url = 'amqp://user:password@localhost:5672/default_vhost'
params = pika.URLParameters(url)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.basic_consume('temperature', callback, auto_ack=True)
channel.start_consuming()
channel.close()
connection.close()
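
The consumer assumes that a temperature table already exists in the postgres_db database. Here is a minimal sketch for creating it with psycopg2; the column layout is an assumption for illustration, and the repository linked below uses its own initialization file:

import psycopg2

connection_sql = psycopg2.connect(database="postgres_db", user="postgres_user", password="password", host="localhost", port="5432")
cursor = connection_sql.cursor()

# Minimal table: an auto-incrementing id, the received value, and an insertion timestamp
cursor.execute("""
    CREATE TABLE IF NOT EXISTS temperature (
        id         SERIAL PRIMARY KEY,
        value      NUMERIC,
        created_at TIMESTAMPTZ DEFAULT NOW()
    );
""")
connection_sql.commit()

cursor.close()
connection_sql.close()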

Complete Python Source Code

You can find the complete source code for this tutorial on GitHub.

It is slightly different from what is presented in this tutorial as it includes the file for creating the temperature table in PostgreSQL.

It uses classes to connect to PostgreSQL and for the RabbitMQ consumer.