Changelog

Follow new updates and improvements to Qovery.

February 12th, 2025

Hello Team,

Check out this week’s changelog for exciting updates and enhancements from our team!

New deployment history view

We’ve revamped the deployment history view for both environments and services to provide greater visibility into:

  • The entire deployment pipeline

  • Deployment times

  • Who triggered the deployment and how

This update makes it easier to track and manage deployments efficiently!

Coming soon: You’ll also be able to manage the deployment queue directly from this section. Stay tuned!

Customize the error pages of your application

By default, when your service is unreachable or encounters issues, generic error pages are returned to the end user (e.g., "404 Not Found" or "503 Service Unavailable"). These pages are served by the NGINX Ingress controller; they are functional but lack branding and contextual information, which makes for a poor end-user experience.

Thanks to our latest developments, you can now create custom error pages and deploy them directly with Qovery.
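
For illustration, here is a minimal sketch of such error pages, assuming the common convention of one HTML file per HTTP status code. File names and contents are illustrative only; the guide below describes the exact setup Qovery expects:

# one page per status code you want to override
mkdir error-pages && cd error-pages
printf '<html><body><h1>Page not found</h1><p>Check the URL or head back home.</p></body></html>' > 404.html
printf '<html><body><h1>Be right back</h1><p>The service is temporarily unavailable.</p></body></html>' > 503.html
# deploy this folder as a small static web container (e.g. nginx) through Qovery,
# then point the error page configuration at it as described in the guide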

Check out our complete guide here

Preparing for the upgrade to Kubernetes 1.31

We have begun preparations for the upgrade to Kubernetes 1.31. In this forum post, you’ll find all the information you need, including:

• How the upgrade is managed

• The upgrade plan (starting with non-prod, then prod)

• What actions you need to take

Minor Changes:

  • Deployment history log breadcrumb: the breadcrumb showing the deployment history now has a fallback for the case where no deployments are available.

  • Added info on the debug pod for self-managed clusters: the cluster dropdown now includes a section describing how to connect remotely to your cluster via the Qovery remote debug pod.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

January 29th, 2025

Hello Team,

Take a look at this week’s changelog, which features some exciting updates and enhancements from our team!

Karpenter out of beta with additional control features

After a long journey, we’re excited to announce that the Karpenter feature is officially out of beta! Karpenter, the advanced EKS node autoscaler, is designed to optimize resource usage for clusters running on AWS. For more details, check it out here.

Here’s what’s new:

  • Default for Non-Production Clusters: Karpenter is now the default node autoscaler for all non-production clusters.

  • Production Cluster Compatibility: You can enable Karpenter for new production clusters, and soon, you’ll also be able to migrate existing production clusters to Karpenter.

Additionally, we’ve introduced a couple of key features to help you control costs and manage your resources more effectively:

  • Consolidation Scheduling: You can now define when the consolidation process occurs for your node pools. Consolidation ensures pod allocation across nodes is optimized, minimizing fragmentation and reducing overall cluster costs.

  • Nodepool Limits: Set maximum CPU and memory usage for node pools, allowing you to control resource consumption and keep your cluster costs in check.

Karpenter nodepool configuration

(if you're interested, check out these two articles where we share some valuable lessons we learned while installing and configuring Karpenter)
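
Under the hood, the Consolidation Scheduling and Nodepool Limits settings above map to fields of the upstream Karpenter NodePool resource, which Qovery manages for you. Here is a rough sketch of the equivalent raw Karpenter configuration; the values are illustrative and the exact manifest Qovery generates may differ:

# Trimmed Karpenter v1 NodePool fragment, written to a local file for inspection only
cat > nodepool-sketch.yaml <<'EOF'
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  limits:
    cpu: "200"      # stop provisioning once the pool reaches 200 vCPUs...
    memory: 800Gi   # ...or 800 GiB of memory
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    budgets:
      # forbid node disruption during business hours, so consolidation
      # effectively runs outside this window
      - nodes: "0"
        schedule: "0 8 * * mon-fri"
        duration: 10h
EOF

With Qovery, you configure these values from the cluster settings; the sketch is only meant to show which Karpenter knobs are involved.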

Connect to your Kubernetes cluster via the remote debug pod

We’ve introduced a new CLI command that simplifies connecting to your Kubernetes cluster without requiring credentials. Note: This feature is restricted to admins, and all connections are logged in the audit logs for security and transparency.

Technically, this feature deploys a dedicated debug pod on your cluster, preloaded with useful tools like kubectl and k9s. It’s an invaluable resource when you need to debug or investigate issues directly from your local machine.
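
In practice, the flow looks roughly like this. The subcommand name below is an assumption, so check qovery cluster --help for the exact name exposed by your CLI version:

# hypothetical invocation; verify the exact subcommand with `qovery cluster --help`
qovery cluster debug --cluster-id <cluster-id>
# once inside the debug pod, the usual tooling is already available, for example:
kubectl get pods --all-namespaces
k9s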

Identify services using deprecated Kubernetes API

At Qovery, part of our job is ensuring that when your cluster is upgraded to a new Kubernetes version, the applications deployed through our “Application,” “Database,” and “Job” services remain fully compatible with the latest version.

However, if you deploy your own applications or third-party services via a Helm chart, you are responsible for ensuring those charts are compatible with the new Kubernetes version.

To make this process easier, we’ve developed a new feature that:

  1. Recaps Deprecated API Usage: you’ll now see a summary in the cluster logs of any services using Kubernetes APIs that are slated for deprecation in the next version. While this is currently a warning message, we strongly recommend addressing it promptly by upgrading your Helm charts to a compatible version.

  2. Blocks Cluster Upgrades with Incompatibilities: if any incompatibilities are detected with the upcoming Kubernetes version, the cluster upgrade process will be halted until the issues are resolved.

This feature helps you proactively address potential problems and ensures a smoother upgrade process.

Find deprecated API usage before the Kubernetes upgrade
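
If you want to audit your own Helm charts ahead of an upgrade, third-party scanners can give you a similar report locally. For example, with Fairwinds' Pluto (not a Qovery tool; flag names are taken from Pluto's documentation, so double-check them with pluto help):

# scan the Helm releases installed in your current cluster context
pluto detect-helm -o wide --target-versions k8s=v1.31.0
# or scan rendered chart manifests before deploying
helm template ./my-chart | pluto detect -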

Applying Rate Limiting and Advanced Whitelist on your services

To help better protect your applications—especially if you don’t already have these safeguards provided by an external service like a CDN—we’ve introduced two powerful new features:

  • Rate limiting: define custom rate-limiting rules for the traffic your application receives. These rules can be general or tailored based on specific request attributes. For more details on configuring rate limiting, check out our guide here; a quick way to sanity-check a limit once it's live is sketched after this list.

  • Advanced IP Whitelist: Create advanced whitelisting rules based on request attributes. For example, you can allow traffic from a specific IP address only if it includes a particular $Token value in the request header. Learn how to set up advanced whitelisting in our guide here.
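
Once a rate limit is in place, here is a quick way to sanity-check it from your terminal (the endpoint and the limit are hypothetical):

# fire 30 quick requests and watch the status codes
for i in $(seq 1 30); do
  curl -s -o /dev/null -w "%{http_code}\n" https://my-app.example.com/api/health
done
# once the configured limit is exceeded, responses should switch to 429 Too Many Requests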

Minor Changes:

  • VPA chart upgrade: we have upgraded the VPA chart to the latest version so that we are ready to upgrade your cluster to Kubernetes 1.31.

  • Removed QOVERY_DEPLOYMENT_ID variable: Following our post here, we have removed the deployment ID environment variable from every service.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

January 15th, 2025

Hello Team,

Take a look at this week’s changelog, which features some exciting updates and enhancements from our team!

New service list: focus on what matters

Through discussions with our customers, we identified some common sources of confusion when navigating the service list page:

  • What’s the difference between deployment status and service status?

  • Why are both showing errors?

  • Which branch is being used for a given service?

Using this invaluable feedback, we have redesigned the service list with the following key improvements:

  • Enhanced Focus on Service Status: The most critical information—service status—now takes center stage in the interface.

  • Streamlined Deployment Status Display: Deployment status is now only shown when relevant, such as during an ongoing deployment or when a deployment fails. Once completed, a timestamp will be shown in the “Last Update” column.

  • Commit Details Relocated: Commit details are moved to a more intuitive location, improving clarity.

  • Branch Information Visibility: The branch used for the deployment is now clearly displayed, reducing any ambiguity.

Check out the video below for a closer look!

Cluster lock - protect your cluster from unwanted updates

We’ve introduced a new feature in the Qovery CLI that allows you to temporarily lock your cluster, preventing any updates—whether initiated by you or Qovery—during critical business periods.

Execute the following command to lock your cluster:

qovery cluster lock --cluster-id <your-cluster_id> --reason <reason> --ttl-in-days <days>

--ttl-in-days: Defines the duration of the lock (maximum of 5 days).
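
For example, to freeze a cluster for three days during a critical business period (the values are illustrative):

qovery cluster lock --cluster-id <cluster-id> --reason "production freeze" --ttl-in-days 3
# locks expire automatically after the TTL; see the documentation linked below for how to lift one earlier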

Qovery reserves the right to force unlock the cluster in the event of a critical bug fix release.

Here's the documentation.

Improved deployment log views with fallback screens

To enhance clarity in understanding deployment outcomes, we’ve added fallback screens to the deployment log views. These updates provide better insights into what happened during your environment deployment.

You’ll now see a default screen clearly indicating if an error occurred:

  • During the pre-check phase

  • During the deployment of another service

This feature ensures you can quickly identify and address issues in your environment deployment process.

Encryption at rest enabled for Redis (AWS)

We have enabled encryption at rest by default for managed Redis instances deployed via Qovery on AWS. This enhancement provides an added layer of security for your data.

Please note that this configuration applies only to new Redis instances. Unfortunately, AWS does not support activating this feature for existing Redis instances.
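
If you want to double-check the setting on a newly created instance, the AWS CLI can confirm the flag (this assumes the instance is provisioned as an ElastiCache replication group; the identifier is a placeholder):

aws elasticache describe-replication-groups \
  --replication-group-id <your-redis-identifier> \
  --query "ReplicationGroups[0].AtRestEncryptionEnabled"
# expected output for a newly created instance: true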

Minor Changes:

  • Configure build ephemeral storage: You can now configure the amount of ephemeral storage given to the builder machines via the advanced setting build.ephemeral_storage_in_gib.

  • Helm network configuration not cloned if service-specific: The network configuration of your Helm chart won't be cloned when the referenced service name is tied to the service ID (like helm-z1234431-my-service-name). In that case the cloned configuration cannot work, and it was creating confusion.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

December 18th, 2024

Hello Team,

Take a look at this week’s changelog featuring some exciting updates and enhancements from our team!

New database versions: PostgreSQL 17, MongoDB 8 and MySQL 8.4

We’re excited to announce support for the latest versions of popular databases:

  • PostgreSQL 17

  • MongoDB 8

  • MySQL 8.4 (available in container mode only)

Additional Database versions

Log view - deployment action button and new deployment notification

We’ve improved the deployment log view to make it more efficient and user-friendly.

  1. Deployment Button: Trigger or cancel deployments directly from the log view. No need to switch back and forth between the service list page and the log interface—saving you time and clicks.

  2. New Deployment Notification: If a new deployment starts while you’re reviewing logs, you’ll receive a notification within the interface. This feature ensures you’re always looking at the latest deployment logs without confusion.

Kubernetes upgrade to 1.30 completed

The upgrade to Kubernetes 1.30 for all production clusters has been finalized! 🎉

Additionally, we’ve introduced a new guardrail to our upgrade process. Clusters will no longer upgrade if any currently deployed service relies on a Kubernetes API slated for deprecation in the next version.

What’s Next?

Our team plans to start upgrading clusters to Kubernetes 1.31 in January. We’ll share the exact timeline and details closer to the release.

Minor Changes:

  • Pod Sidebar in Logs: A few minor bug fixes and performance improvements have been delivered.

  • Buildpack Support Removed: Following the decommissioning of Buildpack support, we’ve completely removed its codebase and cleaned up the interface for a sleeker experience.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

December 4th, 2024

Hello Team,

Have a look at our changelog, which has some cool features delivered by our team.

Qovery at AWS re:Invent 2024

This week, the Qovery team is thrilled to be part of AWS re:Invent 2024! Come meet us at booth #319 and discover how we’re redefining DevOps automation for developers and organizations alike.

Qovery at re:Invent

We love connecting with our customers face-to-face, exchanging insights, and engaging with the broader AWS community. Stop by for a chat—we’d love to hear about your challenges and show you how Qovery can simplify your cloud workflows.

Cluster configuration diff in deployment logs

To improve visibility and transparency, we’ve introduced cluster diff insights in the deployment logs of Qovery Managed Clusters. This feature provides a clear view of changes to your cluster caused by configuration updates or Qovery backend updates.

Cluster diff in logs

In the Cluster Log section, you’ll now see:

  • Terraform Diff: Highlights differences between the infrastructure running in your cloud account (e.g., EKS, security groups, VPC) and the Qovery configuration.

  • Helm Diff: Displays the differences between currently running Qovery applications (e.g., agents, cert-manager) and the latest Qovery configuration.

These diffs can arise from:

  1. Changes you’ve made to your cluster configuration, such as updating node types or modifying Nginx settings.

  2. Updates from Qovery’s infrastructure engine or Helm chart releases.

Get application and pod status directly in the log interface

We’ve made troubleshooting faster and more intuitive by integrating application and pod statuses directly into the deployment and application log interface.

On the right-hand side of the interface, you can now:

  • See the current application status.

  • Access errors raised by pods and quickly jump to their application logs.

When an issue occurs, the pod indicator will turn red, and detailed error logs will be accessible directly within the interface.

Upcoming Improvements:

  • Version Comparison: During deployment, you’ll soon be able to compare the old and new versions of your application to identify issues in newly deployed pods.

  • Enhanced Troubleshooting: We’re working on providing more detailed, actionable insights to help you resolve pod errors quickly and confidently.

Fetch images from private registries

You can now easily browse and select images stored in private container registries directly from the Qovery interface.

For instance, if your private Elastic Container Registry (ECR) contains multiple images, Qovery can:

  • Fetch the complete list of available images.

  • Allow you to search by typing at least three characters.

  • Simplify image selection during deployments.

This update ensures a smoother, more efficient workflow for teams working with private registries.

Minor Changes:

  • Added new service icons: You will find some cool new icons that you can select for your service, such as S3, Lambda, etc.

  • Pipeline error indicator: if a deployment within the environment has failed, a red indicator is displayed next to the pipeline button.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

November 20th, 2024

Hello Team,

Our team is back on track, and we have made some nice changes to the platform over the past sprint. Take a look at what we have managed to deliver:

Filter the Instance Types to be used with Karpenter

Karpenter has been a part of our platform for over six months, and now we’re taking it to the next level.

You can now restrict the types of instances Karpenter uses to deploy your applications. This feature is particularly useful if you want to:

  • Reduce the number of nodes by using larger instances.

  • Limit deployments to specific EC2 instance families.
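
Under the hood, this restriction maps to a requirement on the Karpenter NodePool, which Qovery manages for you. A rough sketch of the equivalent raw Karpenter fragment (the instance families are illustrative):

# Trimmed NodePool requirement, written to a local file for inspection only;
# with Qovery you simply pick the allowed instance types in the console
cat > instance-filter-sketch.yaml <<'EOF'
spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-family
          operator: In
          values: ["m6i", "r6i"]
EOF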

Check out the quick demo below from our CEO to see it in action!

Dev Clusters Upgraded to Kubernetes 1.30

As mentioned in this forum thread, we’ve upgraded your non-production Managed clusters to Kubernetes version 1.30.

Once you have validated that everything works on your non-production cluster, you can manually trigger the upgrade of your production cluster using the "Upgrade to K8s 1.30" option, so the new version reaches production only when you're ready.

Triggering cluster upgrade

If you don’t initiate the upgrade yourself, we’ll proceed with it according to the schedule shared in the forum post.

For users of our Self-Managed solution, please update your Qovery charts version by running the "qovery cluster install" command, and then upgrade your Kubernetes cluster version.

ALB as default and available for prod clusters

The ALB controller feature (available only on AWS) was released a few months ago and was initially limited to non-production clusters. We’ve now updated the configuration with the following changes:

  • Default Activation: ALB is now activated by default for all new customers.

  • Production Cluster Support: You can now enable the ALB feature on your production clusters.

For more details, check out our official communication: ALB Controller Feature.

Email notifications on cluster update failures

To enhance responsiveness, we’ve introduced additional email notifications for cluster-related issues. These notifications will be sent to the owner and admins of your organization in the following scenarios:

  • A cluster update fails.

  • Cluster credentials are no longer valid.

These updates ensure you can promptly address and resolve any issues affecting your cluster setup.

Minor Changes:

  • Use private subnet IDs for existing VPC setup with EKS (AWS): You can now select private subnet IDs when configuring a cluster over an existing VPC. This is necessary to enable Karpenter and run it on Fargate.

  • Removed default security groups (AWS): we have removed the default security groups that were too permissive (allowing 0.0.0.0/0).

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

November 6th, 2024

Hello Team,

It has been a quiet sprint in terms of feature delivery, as our team was enjoying our autumn retreat (check out some pictures here). Nevertheless, take a look at what we have managed to deliver:

Production Clusters Upgraded to Kubernetes 1.29, with version 1.30 upcoming

As mentioned in this forum thread, we’ve upgraded your production Managed clusters to Kubernetes version 1.29.

Now, it’s time to move forward to Kubernetes version 1.30. We've shared a new post on our forum outlining the upgrade schedule.

You can manually trigger the upgrade for your non-production cluster using the "Upgrade to K8s 1.30" option. This allows you to ensure everything runs smoothly with the new version before applying it to your production cluster.

Triggering cluster upgrade

If you don’t initiate the upgrade yourself, we’ll proceed with it according to the schedule shared in the forum post.

For users of our Self-Managed solution, please update your Qovery charts version by running the "qovery cluster install" command, and then upgrade your Kubernetes cluster version.

New Advanced Settings for ALB management

Following the release of the ALB controller feature (only on AWS), we're excited to introduce new advanced settings. These settings allow you to customize how HTTP requests are managed through Qovery.

We have added the following advanced settings:

  • use-forwarded-headers: Passes the incoming X-Forwarded-For header upstream; see documentation.

  • compute-full-forwarded-for: Appends the remote address to the X-Forwarded-For header instead of replacing it; see documentation.

You can find the complete documentation for the advanced settings in this section.
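
As a quick illustration of the difference, assuming your application exposes an endpoint that echoes request headers (the URL is hypothetical):

# send a request that already carries an X-Forwarded-For header
curl -s -H "X-Forwarded-For: 203.0.113.10" https://my-app.example.com/echo-headers
# with compute-full-forwarded-for enabled, the application should receive the full chain,
# e.g. X-Forwarded-For: 203.0.113.10, <address of the previous hop>
# with it disabled, only the address of the last hop is kept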

EC2 (K3s) Offer Decommissioning

Nearly two years after its release, we have decided to decommission the EC2 (K3s) cluster feature.

Following the rollout, we observed limited adoption. Most usage came from individual developers looking for a simple way to deploy side projects, with a very high churn rate. Meanwhile, our broader target audience continued to prefer multi-node Kubernetes clusters, as the EC2/K3s setup lacked node autoscaling, an essential feature for many.

Additionally, maintaining compatibility for the EC2/K3s use case added complexity to our already intricate environment. Each code change required extra consideration for this feature, which consumed significant resources.

Paradox of the low infra cost

Minor Changes:

  • Improved cluster creation flow: cloud hosting option cards have been improved with better visibility on the self-managed offer.

  • Improved notification for clusters in error: in case of cluster update error, you will have a blinking red error dot on the left nav bar (we are working on an email notification system as well).

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

October 23rd, 2024

Hello Team,

This week, our team is enjoying an autumn retreat in Lille, France, spending quality time together after releasing some exciting features over the past two weeks. Check them out:

Non-production Clusters Upgraded to Kubernetes 1.29

As mentioned in this forum thread, we’ve upgraded your non-production Managed clusters to Kubernetes version 1.29.

Now, it’s time to upgrade your production clusters. The upgrade will be triggered on October 28th. If this timeline doesn’t suit you, feel free to manually trigger the upgrade beforehand using the "Upgrade K8s to 1.29" button.

For those using our Self-Managed solution, please update the version of your Qovery charts by running the qovery cluster install command, and then upgrade your Kubernetes cluster version.

New deployment log view

We’ve revamped the deployment log view to improve clarity and visibility:

  • Deployment Pipeline: You now have a global view of each stage in the deployment pipeline and their status.

  • Separation of Deployment vs. Running Logs: We’ve replaced the tab system with two distinct views to display deployment logs and application logs for each service.

  • Deployment Time: You can now clearly see the deployment time for each service, along with the time spent on each step (build, deploy, etc.).

  • Pre-Checks Step: This has always been part of the process, but is now more clearly visible in the interface.

Check out the video below!

Enable Whitelist on Kubernetes Public Endpoints

We've introduced new advanced settings that allow you to restrict access to the public endpoints of your Kubernetes clusters on AWS and GCP. This is especially useful for compliance concerns, as it ensures that only a specific set of whitelisted IPs can access the public endpoints. You can manage this feature via the advanced settings qovery.static_ip_mode and k8s.api.allowed_public_access_cidrs.
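
A quick way to verify the restriction once it is applied (the endpoint is a placeholder; unauthenticated access to /version is allowed by default on Kubernetes, which makes it a convenient probe):

# from a whitelisted IP, the API endpoint still answers
curl -sk https://<your-cluster-api-endpoint>/version
# from any other network, the connection should now time out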

New AWS Regions Available

We’ve expanded our list of supported AWS regions for Managed clusters, now including:

  • Asia Pacific (Hyderabad) ap-south-2

  • Europe (Spain) eu-south-2

  • Middle East (UAE) me-central-1

Minor Changes:

  • Renamed advanced setting database.xxx.deny_public_access: we've renamed this advanced setting to database.xxx.deny_any_access.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

October 9th, 2024

Hello Team,

We have been working hard over the past weeks to deliver you some exciting news on our product, check this out:

Deployment logs - better visibility of the deployment steps

Every application deployed via Qovery goes through several steps, such as git clone, building, image push, etc.

To enhance visibility and provide better insights into the deployment pipeline, we are completely redesigning the deployment log interface. As the first step, we’ve separated the deployment process into two distinct phases: build and deployment. Each phase now has its own status and execution time displayed.

In this way, you'll be able to easily identify where an issue originates and whether either of the two steps is taking longer than expected.

Reduced Build Time by Sharing Images Across Applications

Applications built by Qovery are pushed to your container registry in your cloud account (or elsewhere for Self-Managed clusters), and you can find all the details here. Over the past weeks, we've been working on speeding up deployment times by enabling image reuse across applications within the same cluster, skipping the image build and push steps when possible.

Until last week, every application was built in complete isolation, making it impossible for application B to reuse an existing image built for application A, even if they referenced the same Git repository, Dockerfile, commit ID, and root path; application B always required a full rebuild. With the latest release, this behaviour has changed: application B can immediately reuse the image previously built for application A, reducing the overall deployment time.

This update has been globally released, and all Qovery customers can now benefit from faster deployments. We'll post an article soon to explain these changes and the improvements they've brought to the community.

Karpenter 1.0

We've upgraded Karpenter to its latest major version (1.0) to ensure we deliver the newest features (more details are available on the Karpenter blog).

We're also working on enhancing its flexibility by allowing you to choose the types of instances to run on your cluster. Check out this post from Julien for additional information.

Remember, you can already activate Karpenter on your non-production clusters and take advantage of features like spot instances!

New Cluster Creation View

We've revamped the cluster creation view to better highlight our multi-cloud capabilities and guide you more effectively through the configuration process.

The new interface is similar to the one used for creating a new service, allowing you to easily select your hosting solution and choose between a Qovery-managed or self-managed cluster.

Minor Changes:

  • Fetching images from private container registries: we've added the ability to fetch both the image name and version when referencing a private container registry, making it easier to manage your container images.

  • Variables interpolation helper: we’ve introduced the variable interpolation helper in the environment variable setup, making it effortless to reuse existing environment variables during configuration.

  • Kubernetes 1.29 upgrade: all preparatory work for the Kubernetes 1.29 upgrade is complete. We'll be sending out communication soon with the upgrade timeline.

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀

September 24th, 2024

Hello Team,

We have been working hard over the past weeks to deliver you some exciting news on our product, check this out:

Simplified process to publicly expose services deployed with Helm

Qovery makes it easy to expose services deployed with your Helm chart to the internet. The setup has been streamlined—you only need to select the service and port from a dropdown!

Previously, you had to manually enter both the service name and port. Now, the Qovery control plane automatically retrieves the list of services deployed by your Helm chart along with their exposed ports.

Manual Kubernetes version upgrade

We are currently completing all the preparatory work before initiating the upgrade of your Managed clusters to Kubernetes versions 1.29 and 1.30. We'll soon share the timeline for these upgrades on our website and forum.

As part of this preparation, and to give you more control over the upgrade process, we’ve developed a new feature that allows you to manually trigger the version upgrade. Once the Qovery team officially supports a new Kubernetes version (in this case, 1.29 or 1.30), a new option will appear in your cluster's action list, enabling you to initiate the upgrade at your convenience.

If you prefer not to trigger the upgrade manually, your cluster will be automatically upgraded according to a schedule that we’ll share with you soon.

New application log view

We've updated the application log view to bring some key improvements:

  • Easily filter by pod, container, or version.

  • Expand each log line to view more details.

  • Fresh new UI that's more readable.

Check out the video below!

We're working on delivering a completely new error troubleshooting experience. Stay tuned for it in the upcoming releases!

Buildpack support decommission

We have decided to stop supporting Buildpack and will focus on providing the best experience for applications built with Dockerfiles. You can no longer create new applications using Buildpack, and we will soon decommission the remaining codebase.

However, this doesn't mean we're leaving our existing customers who use Buildpack behind. We are working on a new solution to generate Dockerfiles using generative AI, as shared in our blog post here.

Minor Changes:

  • Removed advanced setting deployment.custom_domain_check_enabled: following the updates on the custom domain configuration section, we have removed this advanced setting.

  • Added production flag in Qovery Terraform provider: You can now define if a cluster is for production directly from your TF manifest.

  • Added linked service info in environment variable dropdown: in the new dropdown used to select environment variables for interpolation, we have added information about the linked service (only for BUILT_IN variables).

For the latest news and upcoming features, remember to check out changelog.qovery.com.

As always, we appreciate your feedback and support.

Happy Deploying!

The Qovery Team 🚀