Introducing Aliquis®
What is Aliquis?
Aliquis is an industrial-grade software framework that offers a rich set of features for low-code/no-code development and deployment of real-world machine vision applications for large-scale, complex scene understanding.
Aliquis can dramatically speed up complex workflows and deploy machine learning algorithms, on servers or at the edge, for manufacturing, inspection, healthcare, monitoring, and predictive maintenance, among other domains.
By combining three types of building blocks (patches, stages, and pipelines) through a human-readable syntax, Aliquis empowers users with no programming skills, but a clear idea of what they want to achieve, to build industry-level applications in the computer vision domain.
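As a purely illustrative sketch of the idea (this is not the actual Aliquis syntax; stage names and structure here are hypothetical), a pipeline might chain stages declaratively along these lines:

```
# Hypothetical pipeline sketch -- not actual Aliquis syntax
pipeline inspection
  stage acquire      # read frames from a camera source
  stage preprocess   # e.g. resize and normalize patches
  stage segment      # CNN-based semantic segmentation
  stage decide       # turn predictions into a pass/fail decision
```

The point is the shape of the approach: each stage is named and configured rather than programmed, and the pipeline composes stages over patches of the incoming frames.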
Data Scientists will enjoy a well-documented declarative language, a full set of tutorials, and a multi-platform multi-OS runtime environment.
ML Engineers can exploit the companion AI-assisted collaborative labeling tool LAIRA, which works with Aliquis to embed machine vision annotations and predictions into data via Ximage.
System Integrators can deploy Aliquis in several manners:
- Command shell through a Conda-based virtual environment.
- ISO for bare metal or virtual machine installations.
- Docker containers.
Linux and Windows 10/11 (via WSL) are natively supported.
An M2M interface for workflow management is exposed via a standard JSON-RPC API.
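Because JSON-RPC 2.0 has a standard request envelope, any client can drive such an interface. The method name and parameters below are hypothetical (the real Aliquis API defines its own); the snippet only shows the standard framing:

```python
import json

def make_jsonrpc_request(method, params, request_id):
    """Build a JSON-RPC 2.0 request envelope (per the public spec)."""
    return json.dumps({
        "jsonrpc": "2.0",   # protocol version, fixed by the spec
        "method": method,   # name of the remote procedure
        "params": params,   # positional list or named dict
        "id": request_id,   # correlates the response with this call
    })

# Hypothetical method and params, purely for illustration.
payload = make_jsonrpc_request("pipeline.start", {"name": "inspection"}, 1)
decoded = json.loads(payload)
print(decoded["method"])  # pipeline.start
```

The payload can then be sent over whatever transport the server exposes (e.g. HTTP POST), and the `id` field lets the client match each response to its request.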
The core engine of Aliquis is built in Python/C/C++ on top of OpenCV and TensorFlow/Keras, although other CV/ML back ends are pluggable.
CTOs and CEOs can rely on available escrow agreements to guarantee business continuity for projects based on Aliquis, as well as for IP assets related to Aliquis.
What is Aliquis capable of?
Aliquis is able to understand a real scene and make decisions based on what it has seen. Hence, Aliquis can be applied to any vision-based decision-making activity. The following operations can be performed with Aliquis:
- 2D, 2.5D, and 3D real-time image analysis.
- Image processing.
- Image classification.
- Semantic image segmentation.
- Object detection.
Developing machine learning applications with Aliquis
In recent years, we have witnessed a new wave of interest in the field of machine learning (ML). The launch of hardware platforms designed for massively parallel computation, the broad availability of data, and algorithmic improvements are the three main factors that let deep learning models (i.e. artificial neural networks) achieve impressive, unprecedented results in machine vision tasks (e.g. image recognition and semantic segmentation). One particular aspect that characterizes the current ML era is the development of brand-new deep learning frameworks, such as TensorFlow and PyTorch, which have become popular and widespread because they allow users to seamlessly build and deploy ML-powered applications without demanding significant programming expertise. Overall, these tools paved the way for the adoption of ML solutions in industrial contexts. Moreover, the arrival of Keras (a high-level API for TensorFlow) has further shortened the path from prototyping to production: Keras is an abstraction layer that provides the building blocks needed to create and train deep learning models, hiding the operations executed by TensorFlow.
Our framework Aliquis has been designed following the same principle, but goes well beyond it: we want to give users, regardless of their skills, a simple yet powerful and flexible tool to reach cutting-edge performance in machine vision applications without necessarily worrying about the underlying complexity; users are thus free to focus on prototyping and deployment rather than technical details. With Aliquis, an intuitive declarative language plus a few bash commands are enough to create datasets, define data transformation pipelines, train neural networks, and run inference, without writing a single line of code. In this sense, Aliquis can be seen as an additional layer stacked on top of Keras, both logically and operationally:
- Logically means that Aliquis hides the calls to the Keras (and OpenCV) API: for instance, a stage using a convolutional neural network (CNN) instantiates a model that is defined by Keras' functions; likewise, when we train models by running the host `aliquispl_keras`, training routines are actually entrusted to Keras' `fit` method. We might say that, from Aliquis' point of view, Keras is a mere computational back end.
- Operationally means that knowing Keras is not required to work with Aliquis, since all the types of operations needed to build and deploy a model or process data can be defined and customized within our framework. In other terms, the tools provided by Aliquis are enough to develop end-to-end solutions.
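The "Keras as a computational back end" relationship can be sketched in a self-contained way (the registry and operation names below are hypothetical illustrations, not the real Aliquis internals): a declarative stage description is resolved to a callable provided by the back end, so the user names operations without ever calling the back-end API directly.

```python
# Hypothetical sketch of a declarative layer dispatching to a back end.
# BACKEND_OPS stands in for a library like Keras/OpenCV; names are illustrative.

BACKEND_OPS = {
    "scale": lambda data, factor=1.0: [x * factor for x in data],
    "clip":  lambda data, lo=0.0, hi=1.0: [min(max(x, lo), hi) for x in data],
}

def run_pipeline(spec, data):
    """Execute a list of declarative stage specs against the back end."""
    for stage in spec:
        op = BACKEND_OPS[stage["op"]]              # resolve the back-end callable
        data = op(data, **stage.get("params", {}))  # run it with declared params
    return data

# A declarative description: the user names operations, never calls them.
spec = [
    {"op": "scale", "params": {"factor": 0.5}},
    {"op": "clip",  "params": {"lo": 0.0, "hi": 1.0}},
]
print(run_pipeline(spec, [0.2, 1.8, 3.0]))  # [0.1, 0.9, 1.0]
```

Swapping the registry for another back end leaves the declarative spec untouched, which is the sense in which the back end is "pluggable."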
Nevertheless, Aliquis is far more than just an interface that simplifies the use of Keras. To give you a sense of its potential, we think that the following three key aspects are worth pointing out:
- Aliquis' foundations lie in handling a scene as a whole, from both the spatial and temporal standpoints: it is specialized in processing sequential frames, namely timelines of 2D images. Originally, Aliquis was developed to operate on a scene using computer vision (OpenCV) and machine learning (TensorFlow/Keras) algorithms jointly; later on, the framework was expanded to include wrapper modules that manage the calls to TensorFlow/Keras and OpenCV functions.
- Thanks to the Aliquis declarative language, the framework's part tailored for data science purposes is decoupled from the SDK (as the documentation's structure highlights).
- Developing the runtime libraries in C/C++ and Python allows binding together development code and deployment code according to the write once, run anywhere (WORA) paradigm.
The application life cycle from development to production
In general, compliance with established ML best practices depends on the project's current phase. When creating a proof of concept (POC), the priority is to determine whether a system is worth taking to production: for instance, we want to understand whether the model can match human performance on the task of interest, or whether its use can generate concrete added value. In this phase, we try to evaluate as fast as possible whether a project is technically feasible, (temporarily) setting aside aspects such as robustness or scalability; we should avoid over-investing in infrastructure, favoring prototyping speed instead. On the other hand, when a POC is successful and we decide to move to the production phase, the development effort shifts towards building and deploying a solution that achieves solid and consistent performance in the real world; the system will therefore embody the fundamental principles of replicability, reliability, monitoring, documentation, etc. Remarkably, Aliquis supports users throughout both of these major phases: the declarative language enables quick experimentation and model testing, while the high-level and low-level APIs allow designing a production-ready software infrastructure.
Why choose Aliquis?
In summary, the most prominent characteristics and strengths of Aliquis are:
- Complete toolset to develop end-to-end, ML-based solutions.
- Inexperienced users can rapidly prototype and deploy industrial-level applications in the field of machine vision using Aliquis' high-level syntax.
- Developers can add their custom stages and hosts through Aliquis' SDK.
- Covers the whole span from proof-of-concept creation to production infrastructure.
- Create datasets of labeled images to train models using LAIRA, a labeling software integrated with Aliquis that provides a simple GUI to draw annotations on images.
- A runtime system, extensible in C/C++, CUDA, and Python, is available for several architectures (x86, x64, ARM, HPC, Google Cloud).
- Native support for Linux and Windows (through WSL) as well as all the other operating systems by means of virtual machines.
A high-level comparison between Aliquis and other popular frameworks is provided below.
| | Aliquis | TensorFlow/Keras | PyTorch | OpenCV |
| --- | --- | --- | --- | --- |
| Developers can introduce new features through an SDK | ✓ | | | |
| Supports Windows (through WSL), Linux, and Docker | ✓ | | | |
| Provides runtimes for several architectures (x86, x64, ARM, HPC, Google Cloud) | ✓ | | | |
| Complete toolset to develop end-to-end AI-based solutions | ✓ | | | |
| Seamlessly combines deep learning algorithms and computer vision processing | ✓ | | | |
| Prototype and deploy applications with a low-code/no-code paradigm | ✓ | | | |
| Integrated annotation and training tools for building AI models | ✓ | | | |
| ISO-based installation for industrial production environments | ✓ | | | |