Changelog

Follow new updates and improvements to Qovery.

June 4th, 2025

Hello Team,

Check out this week’s changelog for exciting updates and enhancements from our team! 🚀

Access your Kubernetes cluster directly from the Qovery console

Admins can now connect directly to their Kubernetes cluster from the Qovery console. Just head to the cluster page, hit the “Connect” button, and you’ll be dropped into a pod with kubectl, k9s, and other tools ready to go.

The objective? We want to simplify the life of cluster admins. No need to download kubeconfigs, install tools, or worry about credentials. It’s now easier and faster to check what’s going on in your cluster, right from the browser.
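Once the shell opens, the preinstalled tooling is ready to use. For instance, a quick first look at the cluster might be (these are just standard kubectl/k9s commands, nothing Qovery-specific):

# Quick health check from the in-browser shell
kubectl get nodes
kubectl get pods --all-namespaces
# Or open the full-screen cluster explorer
k9s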

Additional cluster info available in the Terraform provider and environment variables

We’ve added more Kubernetes cluster metadata, both in our Terraform provider and as environment variables (see the example below):

  • The cloud provider cluster name (env var QOVERY_KUBERNETES_CLUSTER_NAME)

  • The cluster OIDC issuer

  • The cluster ARN

  • The cluster VPC ID (env var QOVERY_KUBERNETES_CLUSTER_VPC_ID)

The objective? Make it easier for anyone to use the cluster information at different levels:

  • Simplify IaC configuration with our Terraform provider: managing the cluster setup without this information was painful and required running custom scripts within the Terraform manifest

  • Simplify the deployment of managed cloud provider services: easily retrieve the VPC information so that the resource can be deployed within the same VPC as the cluster, without having to create a separate one.
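As a small sketch of that second point, a lifecycle job or script running on the cluster could read the injected variables directly. Only the two variable names listed above are shown, and the AWS CLI call is purely illustrative, assuming the job has the required IAM permissions:

# Read the injected cluster metadata at runtime
echo "Cluster name: ${QOVERY_KUBERNETES_CLUSTER_NAME}"
echo "Cluster VPC:  ${QOVERY_KUBERNETES_CLUSTER_VPC_ID}"

# e.g. list the subnets of the cluster VPC so a managed service can be placed alongside it
aws ec2 describe-subnets --filters "Name=vpc-id,Values=${QOVERY_KUBERNETES_CLUSTER_VPC_ID}"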

Azure (AKS) managed solution - Looking for Alpha and future Beta testers

We’re launching a fully Qovery-managed AKS (Azure Kubernetes Service) experience. This brings all Qovery capabilities to Azure without the need to manage the cluster yourself via our self-managed solution.

If you need to run your workload on Azure in a fully managed, scalable and secure way, get in touch with us! (via Slack or the support widget on our website)

We are ready to launch the alpha phase ahead of a public beta, and we are looking for customers willing to collaborate with us through to the official release.

Customer support - Now fully powered by Pylon

We’ve completed the migration to Pylon, our new support system. All console requests now go through the same platform we use for Slack and email.

The objective?

Providing the best customer support has been at the core of our company since day one. Making it scale with a growing number of customers has been challenging, and we had to adapt our internal tools to make it happen. Thanks to the Pylon integration, we can now provide the best support experience across every channel (Slack, email, or our console).

Minor Changes:

  • CLI - added a "list commands" action: it lets you discover the commands available in our CLI.

  • Improved toast message in case of variable conflict: when adding a variable that already exists, you now get a message with the name and location of the conflicting variable.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

May 21st, 2025

Hello Team,

Check out this week’s changelog for exciting updates and enhancements from our team! 🚀

Say hello to the Qovery DevOps Copilot

We’re thrilled to introduce the Qovery DevOps Copilot: a smart, AI-powered agentic assistant designed to supercharge your DevOps workflows.

Currently in Alpha, the Copilot is built to automate repetitive tasks, provide contextual insights, and guide decision-making across your infrastructure where Qovery runs.

🎯 This is an incredible step forward in our vision: to offer a self-service platform that provides an exceptional developer experience, powerful automation, and reduced tooling complexity across the entire application and infrastructure lifecycle... without any deep infrastructure knowledge!

What your DevOps Copilot can do for you today

Imagine you're a developer, it's Friday afternoon, and you must ensure all development environments are properly managed before the weekend to control costs. Instead of manually checking each environment, you can tell the Copilot:

"Show me all development environments that have been inactive for more than 24 hours, and schedule them to stop at 6 PM today. Also, send a Slack notification to the team about this action."

The Copilot will handle this entire workflow for you, saving time and ensuring nothing is overlooked.

By default, the DevOps Copilot is in read-only mode, but here's what your DevOps Copilot can help you accomplish when unlocked:

  • Scheduling: "Deploy service api to staging at 23:00 UTC."

  • Reporting: "Generate a weekly usage and deployment summary for project Atlas." or “How many deployments did we run in our organization over the last 30 days? Can you also give me the deployment time at the 90th, 95th, and 99th percentile for each service?”

  • Environment control: "Stop all development environments that have been idle for 4 hours."

  • Explain Qovery: "How does the networking layer isolate production and staging?"

  • Configuration help: "What does the CONNECTION_TIMEOUT field in advanced settings do?"

  • Optimization: "Propose Dockerfile tweaks to shrink the frontend image." or “How can I optimize my deployment time?”

Check out a few examples:

  • Generate a report on Qovery usage

  • Trigger and queue deployments

ℹ️ Since it is in Alpha, a few improvements and functionalities still need to be implemented. Contact us to get early access and be part of this Alpha phase!

Choose a dedicated control plane for your Kapsule clusters

Until now, we’ve provided access to the Mutualized control plane for Kapsule clusters, a free option that’s been sufficient for most use cases.

But as our customer base grows, we’ve hit some of its limits, including etcd bottlenecks and control plane response time issues.

You can now upgrade to a Dedicated control plane directly from your Qovery console. This gives you:

  • More stability and performance

  • Improved scalability for demanding workloads

  • Full control, without leaving the Qovery UI

New AWS region enabled: Mexico (Central)!

AWS has been steadily expanding its global infrastructure, and we’re keeping pace to give you more flexibility based on your application’s traffic needs.

We’re excited to announce that Mexico (Central) is now fully supported by Qovery! 🇲🇽

This new region, announced by AWS earlier this year (official announcement here), is now ready for your deployments, with all Qovery-managed infrastructure running smoothly.

This addition was made following a customer request, and it’s a reminder:

If you need to deploy to a specific region that isn’t supported yet, just reach out to our support team. We’re listening!

More regions. More flexibility. Same great experience.

Minor Changes:

  • Increased the PDB maxUnavailable value to 20%: this allows Karpenter to scale nodes down more easily and avoids scale-down deadlocks in clusters with a large number of nodes.

  • Replaced the Intercom chat with Pylon: we have migrated our support from Intercom to Pylon, and the chat widget in our front-end application now uses Pylon as well.

  • Added a priority class to the VPA: to ensure the VPA runs smoothly under every condition, a priority class has been assigned to it.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

May 7th, 2025

Hello Team,

Check out this week’s changelog for exciting updates and enhancements from our team! 🚀

🔐 Use Assume Role (STS) for safer AWS connections

Until now, the only way to connect Qovery to your AWS account was through static credentials (Access Key / Secret Access Key). While it works, it’s not the most secure approach.

That’s why we’ve added support for Assume Role via AWS STS - bringing a much more secure and flexible way to authenticate:

  • 🔄 Short-lived credentials: STS credentials automatically expire and refresh, reducing the risk of leaks

  • 🧱 Granular access: IAM roles allow you to define precisely what Qovery can do on your AWS account

  • 🔒 Less exposure: Static credentials are long-lived and easier to compromise

  • 💡 Static credentials are still supported, but we strongly recommend switching to Assume Role for better security.

To make things simple, we provide:

  • A ready-to-use CloudFormation stack to create the role in a few clicks

  • In-app documentation at the exact moment you need it, while setting up your AWS credentials

It's faster, safer, and just a better way to connect.

See the documentation here.
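Under the hood this relies on standard AWS STS: Qovery assumes the IAM role you created and receives temporary credentials. As a rough illustration of the mechanism only (the role ARN and external ID below are placeholders, not the exact values Qovery uses):

# Assuming a role returns credentials that expire automatically
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/qovery-access \
  --role-session-name qovery-session \
  --external-id <external-id>
# The response contains an AccessKeyId / SecretAccessKey / SessionToken plus an Expiration timestamp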

Select your Docker build stage

Until now, there was no way to specify which stage Qovery should build when using a multi-stage Dockerfile.

You can now explicitly select the target stage to build and deploy:

  • Choose the right stage for your app

  • Avoid the need to split or rewrite your Dockerfile

This option is available directly in your service’s Build and deploy settings.

See the documentation here.
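If you want to reproduce the behaviour locally, the same concept exists in plain Docker; the stage name "builder" below is just an example from a hypothetical multi-stage Dockerfile:

# Build only the "builder" stage of a multi-stage Dockerfile
docker build --target builder -t my-app:builder .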

Know when your environment is out of date

Changing a project or environment-level variable? Until now, it wasn’t obvious that you needed to redeploy the whole environment, not just individual services.

We’ve added a small but important visual cue:

  • ⚠️ The Deploy Environment button now turns yellow when something is out of sync

  • 🔁 Helps you avoid issues caused by partial deployments

It’s a subtle change, but it helps ensure your environment stays consistent, especially when using global variables.


Minor Changes:

  • Cancel a queued deployment from deployment history: You can now remove a deployment in the queue, directly from the deployment history view.

  • Enable compression in NGINX by default: Compression is now enabled by default on all new clusters using the built-in NGINX layer, helping reduce payload sizes and improve performance. If you’re using an existing cluster, you can activate it manually by enabling the advanced cluster setting: nginx.controller.enable_compression. ⚠️ Note: If you already use another layer (like a CDN or custom proxy) that handles compression, we recommend not enabling this to avoid double compression issues.

  • Return to infrastructure logs in dry-run mode: When updating a cluster in dry-run, you’re now redirected back to the infrastructure logs.

  • Force lowercase in image names: When deploying a container image in Qovery, we now automatically convert the image name to lowercase to comply with container registry naming rules and prevent unexpected errors.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

April 23rd, 2025

Hello Team,

We have been really active over the past sprint on improving the user experience and adding a few core functionalities to our product.

Improved command bar (Ctrl + k) with service search and favourite service management

The original command bar was a great start—letting you quickly search and run actions based on your current context. But we knew it could do more.

Finding specific services across multiple projects was painful—especially if you constantly switch between 3 or 4 services. So we’ve levelled it up with some powerful new features:

  • 🔍 Service search – Just type the name of a service, and jump straight to it. Quick links to the related environment and project are also available.

  • ⭐ Favourite services – Pin your most-used services to a favourites list—always visible in the command bar.

  • 🕓 Recently visited – Instantly access the last 3 services you opened, right from the bar.

Check out our video below 👇

What's coming next here

We will add the possibility to search for environments and clusters!

Debug improvements - get your pod Kubernetes events on the console

Troubleshooting application issues on Kubernetes can be tricky—you need a clear, detailed view of what’s happening at the pod level.

To help with that, we’ve added Kubernetes event logs directly into your application view. You can now see the latest events that occurred on your app’s pods, making it easier to understand why something might not be working as expected.
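This surfaces in the console what you would otherwise fetch by hand with kubectl; for reference, the manual equivalent looks roughly like this (namespace and pod name are placeholders):

# List the most recent events for a given pod, newest last
kubectl get events \
  --namespace <your-app-namespace> \
  --field-selector involvedObject.name=<your-pod-name> \
  --sort-by=.lastTimestamp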

What's coming next here

We’re working on:

  • A real-time resource view for your applications

  • A new observability tool that gives you visibility over past hours and days of activity

Stay tuned—more clarity and control are on the way!

Test your cluster changes before applying them - Dry-run

Want to see what changes will be made to your infrastructure—before actually applying them?

You can now run a Dry-Run to preview the exact delta between your current setup and what’s defined in your configuration. This gives you peace of mind by clearly showing what’s about to happen.

The dry-run output is split into two key steps:

  • Terraform Plan – See the proposed changes to your cloud infrastructure (resources, networking, storage, etc.)

  • Helm Diff – Review changes to your core cluster components (e.g., Cert-Manager, Qovery-managed apps, etc.)

Perfect for double-checking before big updates 🚀
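Conceptually, the two steps map to commands you may already run yourself; the release and chart names below are placeholders, and Qovery executes the real equivalents against its managed configuration:

# Step 1 - preview cloud infrastructure changes
terraform plan

# Step 2 - preview changes to cluster components (requires the helm-diff plugin)
helm diff upgrade <release-name> <chart> -f values.yaml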

Inject custom configuration for CoreDNS

Some applications rely on specific DNS rules to function properly. To support these advanced use cases, you can now inject custom CoreDNS configurations directly into your cluster.

This is done via the new advanced cluster setting: dns.coredns.extra_config

Example (this snippet forwards DNS queries for example.com to Google's public resolvers):

example.com:53 {
    errors
    cache 30
    forward . 8.8.8.8 8.8.4.4
}

Minor Changes:

  • Qovery production cluster upgraded to 1.31: We have upgraded our production cluster during the maintenance that happened on the 14th of April.

  • Scaleway CoreDNS setup reviewed: we have modified the default configuration applied to CoreDNS and now manage it with a separate snippet.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

April 9th, 2025

Hello Team,

Check out this week’s changelog for exciting updates and enhancements from our team! 🚀

New cluster list page with cluster "Running Status" indicator

At Qovery, we aim to make infrastructure deployment on your cloud account as seamless as possible—while following best practices. However, one area we’ve been working to improve is giving you better visibility into the health of your infrastructure.

We’re excited to introduce the Cluster “Running Status”, the first step toward surfacing key health insights.

While a cluster might show as “deployed,” it doesn’t always mean everything is running smoothly. That’s why we’ve added a new “Running” status to give you a clear signal of the cluster’s actual health.

  • New Status Indicator: The cluster list page now shows whether your cluster is really running or if there are issues under the hood.

  • Dedicated Deployment Status: You’ll now find the deployment status just below the cluster name, and it’s clickable—taking you straight to the deployment logs.

Check out our video below 👇

What's coming next

We’re not stopping here. Here’s a glimpse of what’s in the works:

  • Cluster status visibility throughout the product: Surface cluster health in related views like environments and services

  • Cluster overview and live metrics page: Basic metrics like node status, node types, and a centralized health dashboard—similar to what we provide for applications. Roadmap idea here

  • And .. 🥁 .. A full monitoring feature showing performance trends and key cluster events over time. Roadmap idea here

Clone an environment across projects

You can now clone an entire environment to a different project!

This new feature lets you copy the full environment configuration from one project to another—saving you time and avoiding the need to re-create everything manually.

Previously, this was only available at the service level—but now, we’ve extended it to work at the environment level too 🎉.

Perfect for setting up staging environments, replicating setups across teams, or testing changes in isolated projects.

Give it a try and let us know what you think!

Clone environment to a different project

Wrap-up Demo day Q1-2025

Demo Days are back! 🎉 We’ve just wrapped up our first session of the year, covering everything we delivered during Q1 2025.

👉 Check out our blog post here to find:

  • A recap of all the features and improvements shipped this quarter

  • The full Demo Day video so you can catch up on everything at your own pace

Thanks to everyone who joined live—and if you missed it, now’s your chance to dive in!

Demo Day - Q1 2025

Minor Changes:

  • Customized credentials field name for container registry: the credentials field name was too generic and confusing, so we now customize it based on the selected container registry type.

  • Hide skipped services in deployment pipeline overview: when displaying the entire deployment pipeline, we now hide skipped services by default (the ones that were not deployed during the selected execution).

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

March 27th, 2025

Check out this week’s changelog for exciting updates and enhancements from our team! 🚀

Managing a critical security vulnerability in ingress-nginx

On March 24, 2025, the Kubernetes Security Team disclosed several critical vulnerabilities in ingress-nginx, including CVE-2025-1974 (rated 9.8 CVSS). This vulnerability could allow unauthorized attackers to take full control of Kubernetes clusters.

⚡ Our Immediate Response

We acted swiftly to protect your infrastructure:

9:00 AM → Patched ingress-nginx with the latest security fixes.

10:00 AM → Verified the patch in our test environments.

11:00 AM → Rolled out the fix across all managed clusters:

  • 12:20 PM → Non-production clusters updated.

  • 2:20 PM → Production clusters updated.

March 25, 2025 → Full remediation completed.

🔍 What You Need to Do

  • If you manage your own clusters (self-managed clusters), update ingress-nginx to v1.12.1/v1.11.5 or later as soon as possible.

  • Review your cluster logs for any unusual activity in the past few days.

  • Check out the official Kubernetes security advisory for more details.

We have created a dedicated post here.

We take security seriously and will continue monitoring for any further risks. If you have questions, reach out to us! 🚀

Demo days!

Demo days are back! This is the best way for us to showcase what we have recently released in our product.

For this demo day, our CEO Romaric will do a live "no blabla" demo to introduce you to the latest features, like Karpenter, the new log view, debug pods, etc.

👉 Register yourself here

Demo Day - Q1 2025


Maintenance events in the audit logs

We are regularly updating your cluster with the latest Qovery version, and to ensure you clearly see when an update has been triggered, we have introduced a new audit event called "Maintenance".

To see any event happening on your cluster, you can:

  • Open your cluster settings

  • Select the "See audit logs" view from the dropdown menu (or go directly to the audit logs section and filter the content from there)

Audit log cluster

Minor Changes:

  • Fixed version in deployment history: we have fixed the deployment column on the deployment history page; it now correctly shows the version when you deploy container images.

  • ID displayed for repository/registry/token: when opening a container registry, a Helm repository, or a Git token, you can now see the internal ID assigned by Qovery to that object. This is helpful whenever you need to reference that object in the Qovery Terraform Provider.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

February 26th, 2025

Hello Team,

Check out this week’s changelog for exciting updates and enhancements from our team!

Enable Karpenter on existing production clusters

We’ve enabled the option to activate Karpenter on existing production clusters.

Important Requirement

To enable Karpenter, your cluster must have the "Static IP / NAT Gateway" feature enabled. Here’s why:

  • Karpenter runs on a Fargate node, requiring a private subnet and a NAT gateway to function properly.

  • Without this feature, only one NAT gateway will be deployed in a single zone.

  • If that zone experiences an issue, your cluster won’t be able to run Karpenter, preventing it from scaling your production environment up or down.

Future Improvements

We’re working on a way to enable Karpenter on existing production clusters without requiring a NAT gateway, but this won’t be available until next year.

If you need Karpenter immediately, you’ll need to migrate your applications to a new cluster with the NAT gateway feature enabled.

Activate S3 audit logging

In line with SOC2 compliance recommendations, we’ve introduced a new feature that allows you to enable S3 audit logging on your cluster.

How to Enable It

You can activate this feature directly from your cluster configuration.

This enhancement helps improve security and compliance effortlessly.

Preparing for the upgrade to Kubernetes 1.31

As outlined in our forum post, we’ve now moved into the first phase of the upgrade plan:

  • Every new cluster created via Qovery now runs Kubernetes 1.31 by default.

  • You can manually upgrade your existing cluster or wait for the scheduled upgrade (March 3 → Non-production clusters, March 10 → Production clusters)

Stay tuned for further updates, and refer to the forum post for more details!

Minor Changes:

  • Build target selection available in the API: if you have a multi-stage Dockerfile, you can now select the target stage. This feature is only available via the API; it will soon be available in the UI as well.

  • Return info in case of env var conflict: when adding a new environment variable and a conflict is detected, we now return the name of the service/environment/project where the environment variable already exists.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

February 12th, 2025

Hello Team,

Check out this week’s changelog for exciting updates and enhancements from our team!

New deployment history view

We’ve revamped the deployment history view for both environments and services to provide greater visibility into:

  • The entire deployment pipeline

  • Deployment times

  • Who triggered the deployment and how

This update makes it easier to track and manage deployments efficiently!

Coming soon: You’ll also be able to manage the deployment queue directly from this section. Stay tuned!

Customize the error pages of your application

By default, when your service is not reachable or encounters issues, generic error pages are returned to the end user (e.g., "404 Not Found" or "503 Service Unavailable"). These pages are managed by the NGINX Ingress controller; they are functional but lack branding and contextual information, which makes for a poor experience for end users.

Thanks to our latest developments, you can now create custom error pages and deploy them directly with Qovery.

Check out our complete guide here

Preparing for the upgrade to Kubernetes 1.31

We have begun preparations for the upgrade to Kubernetes 1.31. In this forum post, you’ll find all the information you need, including:

  • How the upgrade is managed

  • The upgrade plan (starting with non-prod, then prod)

  • What actions you need to take

Minor Changes:

  • Deployment history log breadcrumb: we have added a fallback mechanism to the breadcrumb showing the deployment history, in case there are no deployments available.

  • Added info on the debug pod for self-managed clusters: we have added a section in the dropdown describing how to connect remotely to your cluster via the Qovery remote debug pod.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

January 29th, 2025

Hello Team,

Take a look at this week’s changelog, which features some exciting updates and enhancements from our team!

Karpenter out of Beta with additional control feature

After a long journey, we’re excited to announce that the Karpenter feature is officially out of beta! Karpenter, the advanced EKS node autoscaler, is designed to optimize resource usage for clusters running on AWS. For more details, check out the documentation here.

Here’s what’s new:

  • Default for Non-Production Clusters: Karpenter is now the default node autoscaler for all non-production clusters.

  • Production Cluster Compatibility: You can enable Karpenter for new production clusters, and soon, you’ll also be able to migrate existing production clusters to Karpenter.

Additionally, we’ve introduced a couple of key features to help you control costs and manage your resources more effectively:

  • Consolidation Scheduling: You can now define when the consolidation process occurs for your node pools. Consolidation ensures pod allocation across nodes is optimized, minimizing fragmentation and reducing overall cluster costs.

  • Nodepool Limits: Set maximum CPU and memory usage for node pools, allowing you to control resource consumption and keep your cluster costs in check.

Karpenter nodepool configuration

(if you're interested, check out these two articles where we share some valuable lessons we learned while installing and configuring Karpenter)

Connect to your Kubernetes cluster via the remote debug pod

We’ve introduced a new CLI command that simplifies connecting to your Kubernetes cluster without requiring credentials. Note: This feature is restricted to admins, and all connections are logged in the audit logs for security and transparency.

Technically, this feature deploys a dedicated debug pod on your cluster, preloaded with useful tools like kubectl and k9s. It’s an invaluable resource when you need to debug or investigate issues directly from your local machine.

Identify services using deprecated Kubernetes API

At Qovery, part of our job is ensuring that when your cluster is upgraded to a new Kubernetes version, the applications deployed through our “Application,” “Database,” and “Job” services remain fully compatible with the latest version.

However, if you deploy your own applications or third-party services via a Helm chart, you are responsible for ensuring those charts are compatible with the new Kubernetes version.

To make this process easier, we’ve developed a new feature that:

  1. Recaps Deprecated API Usage: you’ll now see a summary in the cluster logs of any services using Kubernetes APIs that are slated for deprecation in the next version. While this is currently a warning message, we strongly recommend addressing it promptly by upgrading your Helm charts to a compatible version.

  2. Blocks Cluster Upgrades with Incompatibilities: if any incompatibilities are detected with the upcoming Kubernetes version, the cluster upgrade process will be halted until the issues are resolved.

This feature helps you proactively address potential problems and ensures a smoother upgrade process.

Find deprecated API usage before kubernetes upgrade
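If you want to audit your own Helm releases ahead of an upgrade, a third-party scanner such as Fairwinds' Pluto can produce a similar report from your machine. This is a suggested workflow on our side, not the mechanism Qovery uses internally:

# Scan the Helm releases deployed in your cluster for APIs deprecated or removed in the target version
pluto detect-helm --target-versions k8s=v1.31.0 -o wide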

Applying Rate Limiting and Advanced Whitelist on your services

To help better protect your applications—especially if you don’t already have these safeguards provided by an external service like a CDN—we’ve introduced two powerful new features:

  • Rate limiting: define custom rate-limiting rules for the traffic your application receives. These rules can be general or tailored based on specific request attributes. For more details on configuring rate limiting, check out our guide here.

  • Advanced IP Whitelist: Create advanced whitelisting rules based on request attributes. For example, you can allow traffic from a specific IP address only if it includes a particular $Token value in the request header. Learn how to set up advanced whitelisting in our guide here

Minor Changes:

  • VPA chart upgrade: we have upgraded the VPA chart to the latest version so that we are ready to upgrade your clusters to Kubernetes 1.31.

  • Removed QOVERY_DEPLOYMENT_ID variable: Following our post here, we have removed the deployment id environment variable from every service.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

January 15th, 2025

Hello Team,

Take a look at this week’s changelog, which features some exciting updates and enhancements from our team!

New service list: focus on what matters

Through discussions with our customers, we identified some common sources of confusion when navigating the service list page:

  • What’s the difference between deployment status and service status?

  • Why are both showing errors?

  • Which branch is being used for a given service?

Using this invaluable feedback, we have redesigned the service list with the following key improvements:

  • Enhanced Focus on Service Status: The most critical information—service status—now takes center stage in the interface.

  • Streamlined Deployment Status Display: Deployment status is now only shown when relevant, such as during an ongoing deployment or when a deployment fails. Once completed, a timestamp will be shown in the “Last Update” column.

  • Commit Details Relocated: Commit details are moved to a more intuitive location, improving clarity.

  • Branch Information Visibility: The branch used for the deployment is now clearly displayed, reducing any ambiguity.

Check out the video below for a closer look!

Cluster lock - protect your cluster from unwanted updates

We’ve introduced a new feature in the Qovery CLI that allows you to temporarily lock your cluster, preventing any updates—whether initiated by you or Qovery—during critical business periods.

Execute the following command to lock your cluster:

qovery cluster lock --cluster-id <your-cluster_id> --reason <reason> --ttl-in-days <days>

--ttl-in-days: Defines the duration of the lock (maximum of 5 days).

Qovery reserves the right to force unlock the cluster in the event of a critical bug fix release.

Here's the documentation.
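For example, to freeze a cluster for three days during a critical business window (the cluster ID and reason below are illustrative):

# Lock the cluster for 3 days with an explicit reason
qovery cluster lock --cluster-id <your-cluster-id> --reason "Black Friday freeze" --ttl-in-days 3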

Improved deployment log views with fallback screens

To enhance clarity in understanding deployment outcomes, we’ve added fallback screens to the deployment log views. These updates provide better insights into what happened during your environment deployment.

You’ll now see a default screen clearly indicating if an error occurred:

  • During the pre-check phase

  • During the deployment of another service

This feature ensures you can quickly identify and address issues in your environment deployment process.

Encryption at rest enabled for Redis (AWS)

We have enabled encryption at rest by default for managed Redis instances deployed via Qovery on AWS. This enhancement provides an added layer of security for your data.

Please note that this configuration applies only to new Redis instances. Unfortunately, AWS does not support activating this feature for existing Redis instances.

Minor Changes:

  • Configure build ephemeral storage: You can now configure the amount of ephemeral storage given to the builder machines via the advanced setting build.ephemeral_storage_in_gib.

  • Helm network configuration not cloned if service-specific: the network configuration of your Helm chart won't be cloned if the referenced service name is strictly tied to the service ID (like helm-z1234431-my-service-name). In this case, cloning the configuration wouldn't work and was creating confusion.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀