Frequently Asked Questions

As a customer, how can I procure Swayam AI MLOps?

To know more about Swayam AI MLOps, you can email us at sales@minfytech.com with a brief description of your requirement. We'll respond to your query and set up a follow-up session to demonstrate Swayam AI MLOps capabilities. Our experts will be available during the session to address additional technical queries.

If you're already a Minfy customer, please reach out to your designated Minfy contact.

How can I deploy Swayam AI MLOps?

Swayam AI MLOps is distributed as a self-contained software package, along with instructions for installation and configuration. If you choose to deploy it yourself, simply follow the install guide. Minfy can help you with a one-time initial deployment at no cost.

If you need long-term support for advanced deployment or customization, Minfy also offers outcome-driven consulting services.

What is the TCO of Swayam AI MLOps?

The TCO of Swayam AI MLOps includes not just the initial setup and deployment costs but also ongoing operational expenses. Costs can vary widely based on scale, complexity, and other factors. Here's a breakdown of the key components to consider. Depending on the pricing model a customer chooses, some or all of these cost components may be covered in the contract.

Initial Costs

      Infrastructure Setup: The infrastructure Swayam AI MLOps runs on (servers, storage, networking) in a cloud or on-premises environment incurs costs. For cloud deployment, these costs depend on the chosen provider and the resources allocated.

      Development and Customization: Tailoring Swayam AI MLOps to fit specific workflows, integrating existing systems, and developing additional features or connectors can require additional effort and expertise.

      Training and Onboarding: Staff may need training to effectively use and manage the new tools, which can include formal training sessions, materials creation, and time spent learning.

Operational Costs

      Maintenance and Upgrades: Regular software updates, patches, and security fixes need to be applied, requiring dedicated staff time or external consultants.

      Support and Troubleshooting: While open-source projects may have community support, enterprises often need more reliable and immediate assistance, which might involve paid support contracts or hiring experts.

      Infrastructure Costs: Ongoing expenses related to running the infrastructure, such as compute instances, storage, and network usage, including data transfer costs in and out of the cloud environment.

      Compliance and Security: Ensuring the platform complies with relevant industry regulations and standards can involve additional tools, audits, and personnel.

      Data Management: Costs associated with storing, managing, and transferring large datasets, which are central to ML workloads.

      Scaling: As usage grows, so will the costs for additional resources and possibly more complex management and governance structures.

Hidden Costs

      Integration: Efforts required to integrate the Swayam AI MLOps platform with existing systems (e.g., data sources, CI/CD pipelines, monitoring tools).

      Vendor Lock-in Risk: While being cloud-neutral reduces risk, there may still be dependencies or integrations that favour certain environments or tools, potentially complicating future migrations.

Estimating TCO

The overall TCO can vary significantly. A small to medium-sized implementation might incur costs from tens of thousands to hundreds of thousands of dollars annually when considering infrastructure, personnel, and indirect costs. Larger deployments or those with higher complexity and stringent compliance requirements can see costs escalate into the millions.

What is a preliminary comparative assessment of Swayam AI MLOps against alternatives?

Comparing Swayam AI MLOps with MLrun, Dify.ai, and ZenML involves looking at various aspects such as their core features, target audience, deployment options, and integrations. Here's a tabular comparison to highlight some of the key differences and similarities.

| Feature / Aspect | Swayam AI MLOps | MLrun | Dify.ai | ZenML |
|---|---|---|---|---|
| Core Focus | Cloud-neutral, open source MLOps platform for small & medium teams | MLOps framework to manage and automate machine learning pipelines | AI as a service platform offering easy integration of ML models into products | An extensible MLOps framework to create reproducible ML pipelines |
| Target Audience | Team of data scientists and ML engineers collaborating on ML and GenAI projects | Data scientists and ML engineers looking for an end-to-end MLOps solution | Businesses and developers needing quick AI capabilities without deep ML expertise | ML teams needing a scalable and reproducible workflow |
| Deployment | On-premises, cloud | On-premises, cloud, or hybrid environments | Cloud-based service | On-premises, cloud, or hybrid environments |
| Open Source | Yes | Yes | No (proprietary) | Yes |
| Primary Language | Python | Python | - (Service-based) | Python |
| Pipeline Definition | Web interface, Python code | Python code, YAML | Web interface, API | Python code |
| Data Versioning | Supported through integration (e.g., with DVC) | Supported through integration (e.g., with DVC) | Not explicitly mentioned; focuses more on model deployment | Built-in support and integrations with tools like DVC |
| Experiment Tracking | Yes, powered by MLflow | Yes | Yes, as part of the service | Yes, through integrations with MLflow, Weights & Biases, etc. |
| Model Serving | Yes, powered by Seldon Core MLServer | Yes, with Nuclio and integration with serving tools like Seldon Core | Yes, core feature | Yes, through integrations and native components |
| Scalability | Can scale vertically and through decoupled architecture | Designed for scale, supports Kubernetes | Scalable as a managed cloud service | Designed for scalability, supports Kubernetes |
| Community and Support | Open-source community, additional support by Minfy for implementation | Open-source community, commercial support available | Provided by Dify.ai as a managed service | Open-source community, commercial support through third parties |

What level of readiness and resources do I need within my organisation to start with Swayam AI MLOps?

There is no minimum organisational readiness required to get started with Swayam AI MLOps. Taking into account the size, skills, and expertise of a customer's existing AI/ML team, Minfy can tailor the offering and implementation services for Swayam AI MLOps towards specific business outcomes.

How can I make a business / financial case to get approval from my CFO?

Please email us at sales@minfytech.com with a brief description of your requirement. We'll respond to your query and set up a follow-up session to assess your business objectives. Our experts will then build a business case for Swayam AI MLOps to achieve the identified business outcomes.

If you're already a Minfy customer, please reach out to your designated Minfy contact.

I am an ML Engineer in my company. How can I position Swayam AI MLOps to our AI leader and explain benefits of deploying for our entire ML team?

A cloud-neutral, open-source MLOps platform offers numerous benefits for AI developers, ML engineers, and data scientists by providing a flexible, cost-effective, and scalable approach to machine learning operations. Here are some of the key advantages.

Flexibility and Portability

Cloud Neutrality: Avoids vendor lock-in, offering the freedom to deploy on any cloud provider (AWS, Azure, GCP, etc.) or on-premises environments. This ensures that AI projects can be moved or scaled across different infrastructures based on cost, performance, or regulatory requirements.

Technology Agnosticism: Supports a wide range of tools and frameworks, enabling teams to choose the best tools for their specific needs without being constrained by platform-specific limitations.

Cost Efficiency

Open-Source Licensing: Eliminates the need for expensive proprietary software licenses, significantly reducing the upfront and ongoing costs associated with developing and deploying AI models.

Optimized Resource Utilization: Enables more efficient use of computing resources through containerization and orchestration (e.g., Kubernetes), reducing operational costs.

Enhanced Collaboration

Standardization: Promotes the use of standardized processes and tools across the entire ML lifecycle, enhancing collaboration among data scientists, engineers, and operational teams.

Community Support: Access to a broad community of users and developers can lead to shared innovations, problem-solving, and enhancements to the platform.

Innovation and Speed

Prototyping and Deployment: Streamlines the process from model development to deployment, enabling faster iteration and innovation.

Continuous Integration/Continuous Deployment (CI/CD): Supports CI/CD practices for machine learning, allowing teams to automate testing and deployment of models, thus accelerating the release cycle.

Comprehensive MLOps Features

End-to-End Management: From data preparation to model training, versioning, deployment, and monitoring, providing a cohesive workflow for ML projects.

Experiment Tracking and Versioning: Facilitates tracking of experiments, model versioning, and rollback, enhancing reproducibility and governance.

The Swayam AI MLOps platform empowers teams with the tools and flexibility needed to innovate quickly, collaborate effectively, and deploy scalable, cost-effective AI solutions.

What is Swayam AI MLOps?

Swayam AI MLOps is a cloud-neutral, open-source platform that addresses all stages of the machine learning lifecycle.

It offers a web-based interface to components such as a code repository, interactive computing, a feature store, ML experiment tracking, model serving, metrics, and more.

Who does it benefit and how?

Designed with small to medium-sized teams of data scientists and ML engineers in mind, Swayam AI MLOps fosters an environment of collaboration and shared knowledge. The platform provides a suite of tools that allow team members to work together more efficiently, regardless of their location.

Swayam AI MLOps addresses every phase of the ML lifecycle, from data preparation and model building to deployment and monitoring. This holistic approach empowers teams to manage their projects through a single, unified platform, enhancing productivity and reducing the complexity typically associated with ML projects. With Swayam AI MLOps, data scientists can experiment with different models using a range of frameworks and libraries. ML engineers can use this platform to deploy models into production, and monitor performance to ensure optimal outcomes.

How does it save development time?

Swayam AI MLOps provides a collaborative, web-based platform for all stages of the ML lifecycle. Some of the key features that contribute to better productivity are:

Component Reusability: Enables the reuse of components across different projects, such as a shared feature store for model training and inference, reducing duplication of effort.

CI/CD for ML: Integrates Continuous Integration/Continuous Deployment pipelines for ML models, automating the testing, validation, and deployment processes, ensuring that models can be rapidly iterated and deployed.

Centralized Experiment Tracking: Tracks experiments in a centralized system, making it easier to compare models, parameters, and results, thereby quickly identifying the best performing models.

Parallel Experimentation: Supports running multiple experiments in parallel, leveraging cloud or on-premises resources efficiently to reduce the time required for model exploration and testing.

Collaborative Workspaces: Facilitates collaboration among team members by providing shared workspaces, documentation, and communication tools, speeding up the problem-solving process.
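In Swayam AI MLOps, centralized tracking is powered by MLflow. The sketch below uses only the Python standard library (a hypothetical `RunTracker`, not the MLflow API) to illustrate the idea: every run records its parameters and metrics in one shared place, so the best performer can be identified later.

```python
import json
import tempfile
from pathlib import Path

class RunTracker:
    """Minimal stand-in for a centralized experiment tracker.

    Each run records its parameters and metrics as a JSON file in a
    shared directory, so any team member can compare results later.
    """

    def __init__(self, store_dir):
        self.store = Path(store_dir)
        self.store.mkdir(parents=True, exist_ok=True)

    def log_run(self, run_id, params, metrics):
        record = {"run_id": run_id, "params": params, "metrics": metrics}
        (self.store / f"{run_id}.json").write_text(json.dumps(record))

    def best_run(self, metric, higher_is_better=True):
        runs = [json.loads(p.read_text()) for p in self.store.glob("*.json")]
        key = lambda r: r["metrics"][metric]
        return max(runs, key=key) if higher_is_better else min(runs, key=key)

tracker = RunTracker(tempfile.mkdtemp())
tracker.log_run("run-1", {"lr": 0.1}, {"accuracy": 0.81})
tracker.log_run("run-2", {"lr": 0.01}, {"accuracy": 0.87})
best = tracker.best_run("accuracy")
print(best["run_id"])  # run-2
```

A real tracker adds artifacts, tags, and a web UI on top of this pattern, but the core value is the same: one queryable store instead of results scattered across notebooks.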

Is it a one click deployment and go live platform?

Not quite one click: Swayam AI MLOps consists of several components, each of which can be installed easily using the provided scripts.

To integrate with your existing organisational IT tools and processes, installation and configuration of additional software may be required. For example, you may need a setup that allows your applications to authenticate users via GitLab, with GitLab itself authenticating against your enterprise Active Directory.

I don't want to commit long term subscription to Swayam AI MLOps. Can I develop and export the environment out of this platform?

Yes. Swayam AI MLOps is built entirely on open-source components, so you can freely interoperate with other tools and platforms as long as you maintain general compatibility requirements. For example, if you train an ML model using Swayam AI MLOps on Linux 64-bit Intel/AMD architecture, you can move the model artifacts outside Swayam AI MLOps and deploy them in any other environment with the same underlying architecture, i.e. Linux 64-bit Intel/AMD. If you try to deploy the model on macOS it will fail, but not because of any limitation of Swayam AI MLOps.
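A minimal sketch of that portability, using a toy dictionary "model" and the standard pickle module rather than real training artifacts: the exported file can be copied out of the platform and loaded in any compatible Python environment.

```python
import pickle
import tempfile
from pathlib import Path

# A toy "model" standing in for trained artifacts: just weights and a bias.
model = {"weights": [0.5, -1.2], "bias": 0.3}

# "Export": serialize the artifact to a file, much as you would copy model
# files out of the platform's artifact store.
artifact = Path(tempfile.mkdtemp()) / "model.pkl"
artifact.write_bytes(pickle.dumps(model))

# "Import" on another host with a compatible environment: load and predict.
restored = pickle.loads(artifact.read_bytes())

def predict(m, x):
    # Simple linear model: dot(weights, x) + bias
    return sum(w * xi for w, xi in zip(m["weights"], x)) + m["bias"]

print(predict(restored, [2.0, 1.0]))  # approximately 0.1
```

Real model formats (MLflow model directories, ONNX, framework checkpoints) work the same way conceptually: they are plain files, and portability depends on the target environment, not on the platform.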

What kind of skill set is required to use this?

Swayam AI MLOps is primarily for data scientists and ML engineers. Users are expected to have foundational knowledge of data science, DevOps, and the ML lifecycle, plus hands-on experience with Python and libraries such as Hugging Face Transformers. Familiarity with Docker and Kubernetes is a plus and comes in handy for tasks related to deploying ML models in production.

Can this be used by multiple developers with dedicated environment for them?

Yes, Swayam AI MLOps fosters an environment of collaboration and shared knowledge. The platform provides a suite of tools that allow team members to work together more efficiently, regardless of their location.

The platform is designed for concurrent use by a team. Each user gets an isolated environment for development and unit testing. Some system-wide resources are shared by all users, e.g. a common feature store and a common inference server instance that supports multiple models.
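The shared, multi-model serving pattern can be pictured with a small standard-library sketch (a hypothetical `MultiModelServer` class, not the actual Seldon Core MLServer API): each user registers a model under a name, and inference requests are routed by that name.

```python
class MultiModelServer:
    """Toy sketch of a shared inference server hosting several models.

    Real deployments use a serving runtime such as Seldon Core MLServer;
    here a model is just a callable registered under a name, and
    inference routes requests by that name.
    """

    def __init__(self):
        self._models = {}

    def register(self, name, model_fn):
        self._models[name] = model_fn

    def infer(self, name, payload):
        if name not in self._models:
            raise KeyError(f"model '{name}' is not deployed")
        return self._models[name](payload)

server = MultiModelServer()
# Two users deploy their own models to the one shared server.
server.register("churn", lambda x: "churn" if x["tenure"] < 6 else "stay")
server.register("double", lambda x: x["value"] * 2)

print(server.infer("churn", {"tenure": 3}))   # churn
print(server.infer("double", {"value": 21}))  # 42
```

The design choice this illustrates: model isolation happens at the registry level, so one server process (and its infrastructure cost) is amortized across the whole team.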

How does Swayam AI MLOps deal with data security?

Data security in Swayam AI MLOps is primarily based on the security features available in modern Linux variants and NFS version 4.1.

Firewall (iptables/nftables): Tools for configuring network packet filtering rules to control incoming and outgoing traffic based on predefined security policies.

SSH Keys for Authentication: Allows secure remote login from one computer to another, replacing password-based logins with cryptographic keys, enhancing security against brute-force attacks.

Encrypted Filesystems (e.g., LUKS): Allows for disk encryption to protect data at rest, securing data from unauthorized access should the physical security of the storage device be compromised.

Access Control Lists (ACLs): NFS v4.1 supports more granular access control lists similar to those in Windows, allowing for detailed specification of permissions for different users and groups.

For cloud-based deployment of Swayam AI MLOps, additional security controls can be implemented, e.g. identity & access management, virtual private networks, and certificate management.

How much will the backend infrastructure cost?

The cost of infrastructure on which Swayam AI MLOps runs depends on several factors such as:

Scale of deployment

What components are enabled and configured

Data egress costs (for cloud-based deployment)

Tiered billing, discounts, and savings plans (for cloud-based deployment)

The most significant infrastructure components are Linux servers, a shared file system, NAT gateways, and the network backbone.
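As a back-of-the-envelope illustration, the components above can be combined into a simple monthly estimate. Every rate in this sketch is a hypothetical placeholder, not a Minfy or cloud-provider price; substitute real quotes for your environment.

```python
# Illustrative monthly cost estimate for the backend infrastructure.
# All rates below are hypothetical placeholders for illustration only.
servers = 4          # Linux servers
server_rate = 150.0  # $/server/month (hypothetical)
shared_fs_tb = 2     # shared file system capacity in TB
fs_rate = 80.0       # $/TB/month (hypothetical)
nat_gateways = 2
nat_rate = 35.0      # $/gateway/month (hypothetical)
egress_gb = 500      # monthly data egress in GB
egress_rate = 0.09   # $/GB (hypothetical)

monthly = (servers * server_rate
           + shared_fs_tb * fs_rate
           + nat_gateways * nat_rate
           + egress_gb * egress_rate)
print(f"Estimated monthly cost: ${monthly:.2f}")  # $875.00
```

Tiered billing, discounts, and savings plans would then be applied on top of such a baseline, which is why a requirement-specific estimate from Minfy is more reliable than any generic figure.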

During preparation of the business case for Swayam AI MLOps, Minfy creates a cost estimate of the backend infrastructure specific to a customer's requirements. If you'd like a budgetary estimate of infrastructure cost, please email us at sales@minfytech.com or call your designated Minfy point of contact.
