ITCare is cegedim.cloud's cloud management platform.
It incorporates a web administration interface and an API that provide a 360° view of your Cloud resources hosted and managed by cegedim.cloud.
Designed as a unified web service, it covers the following key areas:
Cloud resource management: deploy and administer your resources.
Supervision: monitor the health and performance of your applications, be notified in case of incidents.
Support: contact our support teams for any queries or incidents.
Governance: view security and obsolescence reports, manage maintenance slots.
Integration: integrate your business processes with your cloud via the ITCare API.
You can access ITCare from any page of this website using the ITCare link in the header.
The Get started with ITCare page explains in detail how to access ITCare with information about authentication and permissions.
The Authentication page gathers all the information needed to discover and make good use of the ITCare API.
If you are a customer, you can reach our Service Desk through this direct phone line for any support request: +33 (0)1 49 09 22 22
For any information request or contact, please use the contact form of our public website:
A Data Center uses three main categories of equipment:
Compute equipment (e.g., ESX servers based on X86 processors, IBM servers based on Power processors, etc.), which host the various instances.
Storage equipment (e.g., data storage arrays, object storage arrays, backup and archiving storage arrays).
Network equipment (e.g., network switches, firewalls, BigIP, etc.), which enable internal and external flow exchanges between services and instances.
Note: These three categories themselves consist of subcategories that allow for a more detailed calculation of CO2 emissions per user instance. For simplicity, these subcategories will not be included in the calculation method outlined below.
The energy consumption of each piece of equipment is collected every minute (instantaneous power in watts) and stored in a database (MIMIR key-value database).
Additionally, we collect the following elements to calculate the associated CO2 emission:
The PUE (Power Usage Effectiveness), calculated daily for the data centers we operate and monthly for colocation data centers.
The CO2 emission factors (in kgCO2e/kWh), which are updated annually based on data from ADEME and suppliers.
The CO2 emission of a piece of equipment is calculated on a daily basis:
Convert the instantaneous consumption into kWh/day from the data collected in the key-value database.
Apply the PUE of the data center:
If a piece of equipment consumes 1000 kWh/day and the PUE of the data center is 1.3, the actual consumption of the equipment will be 1300 kWh.
Convert the kWh/day into CO2 (kg) using the CO2 emission factor from the supplier:
If 80% of the energy used by the data center comes from a supplier who emits 0.008 kg of CO2 per kWh, and the remaining 20% comes from a second supplier who emits 0.01 kg of CO2 per kWh, the equipment’s emission will be: 1300 * 0.008 * (80 / 100) + 1300 * 0.01 * (20 / 100), which equals 10.92 kg of CO2 per day.
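The worked example above can be checked with a short Python sketch (all figures are taken directly from the text):

```python
# Daily CO2 emission of one piece of equipment, using the figures above.
consumption_kwh = 1000               # measured consumption, kWh/day
pue = 1.3                            # PUE of the data center
actual_kwh = consumption_kwh * pue   # 1300 kWh/day once the PUE is applied

# Energy mix: (emission factor in kgCO2e/kWh, share of the mix)
suppliers = [(0.008, 0.80), (0.010, 0.20)]

co2_kg_per_day = sum(actual_kwh * factor * share for factor, share in suppliers)
print(round(co2_kg_per_day, 2))  # 10.92
```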
A Global Service is the logical grouping of resources (primarily Instances) that use resources from different categories of physical equipment (Compute, Storage, and Network). The carbon footprint of a Global Service will therefore be the sum of the CO2 emissions of its resources.
To help you understand and master every aspect of the ITCare platform, we offer a set of interactive demos specific to the features available in our Cloud Management Platform.
Various actions are possible to manage resources on our ITCare platform. We present them to you below!
Based on the selected resources in the filter window, dynamic filters will now be available to more efficiently display what you care about.
Your notifications are configurable and customizable through subscriptions. We show you how below!
The first essential step: create a subscription according to your personalized criteria.
Now that you know how to create a subscription, let's see how to manage it.
Subscriptions take advantage of delivery groups. We will show you how to manage them.
When your notifications are configured, it's essential to know how to monitor them.
cegedim.cloud provides a dedicated section in ITCare for its carbon footprint.
This section allows users to identify the environmental impact of applications in detail, displaying both the CO2 emissions of Global Services and those of associated instances.
The carbon footprint calculation is an integral part of the Enercare project, based on the energy consumption distribution of services. It takes into account IT equipment (servers, storage bays, networks, etc.) as well as all components necessary for their proper operation (air conditioning, generators, uninterruptible power supplies, etc.). These components are associated with the data center energy performance indicator calculated using Power Usage Effectiveness (PUE).
The scope includes both Cegedim’s proprietary data centers and those in colocation.
cegedim.cloud has developed internal tools and a methodology to allocate service consumption for shared instances. As part of a continuous improvement approach, this methodology will evolve with the reliability and accuracy of data collected.
Direct emissions under Scope 1 from data centers: fossil fuel combustion from generators, and fugitive emissions from refrigerants in cooling systems.
Indirect emissions under Scope 2: electricity consumption of equipment, including the PUE.
Indirect emissions under Scope 3: upstream emissions linked to fuels and electricity production, goods and services purchased for service operations (considering their production to end-of-life cycle), and employee travel.
The data presented in Enercare corresponds to cegedim.cloud’s activities, independent of those of other subsidiaries or the Cegedim group. As the carbon footprint is conducted and audited annually, we cannot correlate calculation coefficients to real-time service usage.
The Calculation Rules section below explains the collection of energy consumption data from physical equipment and the method used to calculate the carbon footprint for services and instances.
Welcome to the public documentation of cegedim.cloud, a trusted partner for private cloud hosting!
The calculation of the carbon footprint for our cloud services adheres to the rules of the . We include the following elements in the Enercare functionality:
To learn more about cegedim.cloud’s initiatives to reduce its environmental impact, you can visit the on our website.
Some methods return paginated results. The formatting of a paginated result is always:
The page and size values can be added to the query parameters of the original query in order to get the next or previous page.
Example: to get the first page with 50 items, make a GET request to the paginated endpoint: https://api.cegedim.cloud/foo/bar?page=1&size=50
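A minimal Python helper for building such paginated URLs (the endpoint below is the example one from the docs; the shape of the response depends on the API):

```python
from urllib.parse import urlencode

def page_url(base: str, page: int, size: int) -> str:
    """Build the URL for one page of a paginated ITCare endpoint."""
    return f"{base}?{urlencode({'page': page, 'size': size})}"

# First page of 50 items, as in the example above:
url = page_url("https://api.cegedim.cloud/foo/bar", page=1, size=50)
```

Incrementing the page argument walks through the following pages until the result set is exhausted.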
Each resource belongs to one of these 3 levels of hierarchy: Category, Type, Family. An overview of the categorization is described as follows:
Application server: Tomcat, Wildfly
Container: Kubernetes
Instance: Linux (CentOS, Debian, Oracle, RHEL, Ubuntu), Windows, Unix
Load balancer: Load Balancer
Managed database: MariaDB, OpenSearch, PostgreSQL, Redis, SQL Server
Message broker: Apache Kafka, RabbitMQ
Storage: GlusterFS
When getting a resource, an attribute named path is available in the output to indicate which category-type-family to navigate in order to get details about the resource.
The API is resource-centric: the main entry point of the API is /compute/resources. It can be used to explore basic information and navigate to the proper category to get the details.
JSON does not natively support the Date/Time format. All parameters tagged as Date by the API are therefore strings in ISO8601 format.
Z corresponds to the time zone: +0200 for example.
For GET requests, do not forget to URL-Encode these parameters.
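For example, in Python a date can be formatted in ISO8601 with its time zone and then URL-encoded for use in a GET query:

```python
from datetime import datetime, timezone, timedelta
from urllib.parse import quote

# A date in the +0200 time zone, formatted as ISO8601.
tz = timezone(timedelta(hours=2))
dt = datetime(2024, 7, 1, 12, 30, 0, tzinfo=tz)
iso = dt.strftime("%Y-%m-%dT%H:%M:%S%z")  # '2024-07-01T12:30:00+0200'

# URL-encode before using it as a GET query parameter:
# a raw '+' would otherwise be interpreted as a space.
encoded = quote(iso, safe="")
```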
Some methods allow testing of API calls without actually triggering the action in ITCare. However, the validation is still done. To activate the Dry Run mode, simply add a custom header to your HTTP requests:
Once the server processes your request, the same custom header will be included in the response.
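As a sketch, assuming the header is named X-Dry-Run (an assumed name for illustration; use the header documented for the ITCare API), a request could be prepared like this in Python:

```python
import urllib.request

# Prepare a request carrying a Dry Run header.
# "X-Dry-Run" is a hypothetical header name used for illustration only.
req = urllib.request.Request(
    "https://api.cegedim.cloud/compute/resources",
    headers={"X-Dry-Run": "true"},
    method="GET",
)
```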
Some methods are asynchronous and require a delay after their invocation. This applies to time-consuming transactions such as resource administration or reporting. Methods that operate asynchronously will respond with:
an HTTP return code 202
a body containing a tracking ID for the current asynchronous operation
The status of the actions can be:
IN_PROGRESS
SUCCESS
ERROR
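A polling loop for such asynchronous methods can be sketched in Python as follows (the get_status callable stands in for an authenticated GET on the tracking ID returned with the HTTP 202 response; paths and field names are assumptions):

```python
import time

TERMINAL_STATUSES = {"SUCCESS", "ERROR"}

def wait_for_action(get_status, action_id, interval=5.0, timeout=600.0):
    """Poll get_status(action_id) until the async action leaves IN_PROGRESS."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(action_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)
    raise TimeoutError(f"action {action_id} still IN_PROGRESS after {timeout}s")
```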
The ITCare REST API complies with the OpenAPI standard specifications. The online documentation is available at this address:
You can explore the categorized API specification and use the Test it feature in order to execute and test the endpoints.
The ITCare API uses the OAuth 2.0 protocol for authentication and authorization. It supports the usual OAuth 2.0 scenarios such as those used for web servers and client applications.
This means that each API request must contain an "Authorization" header embedding an access token previously obtained through credentials.
To query the ITCare API, an API account is required in order to obtain the mandatory access token. To obtain this API account, a request must be submitted to the cegedim.cloud support teams by providing the following information:
The target organization
A simple description of the target usage of the API
In general, the base64 command can be used to encode a string with command-line tools on Linux, for example:
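For instance, to Base64-encode a user:password pair (placeholder values shown):

```shell
# -n keeps echo from appending a trailing newline before encoding.
echo -n 'user:pass' | base64
# dXNlcjpwYXNz
```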
If the access_token request is allowed and valid, here is a sample response:
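A typical OAuth 2.0 token response has the following shape (values truncated; the exact fields depend on the authorization server):

```json
{
  "access_token": "eyJhbGciOiJSUzI1NiIs...",
  "token_type": "Bearer",
  "expires_in": 300,
  "refresh_token": "eyJhbGciOiJIUzI1NiIs..."
}
```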
When the token expires, it is possible to:
Request a new access_token
Refresh the token by querying the endpoint /token
Discover the latest updates of cegedim.cloud! Continuous improvements, new products, new features and evolutions are referenced here.
A new security dashboard provides real-time visibility into vulnerabilities, Bot Defense protection, Vault instances, and resource obsolescence. It enables precise tracking of resolved vulnerabilities and blocked attacks. The "Vulnerabilities" and "Bot Defense" pages have been enhanced for optimized management.
We have enhanced the accuracy of calculations by considering only the support periods defined in the indicator. Visibility has been improved with the display of availability and unavailability periods by month or day. The reasons for interruptions are now accessible, and maintenance periods are no longer counted as unavailability, ensuring more reliable indicators.
TLS encryption activation is now possible on request after the provisioning of the PostgreSQL PaaS. Please file a request ticket in ITCare for this feature.
We improved the backup policy selection: a "replicated" policy can now be selected even for "non-production" services. This means that if you have sensitive resources of non-production type, you can select a policy that handles off-site backup replication.
The backup panel has been improved in ITCare for all our PaaS products. It is now displayed on all products, and more information is shown, such as the backup policies, the last backup timestamp and the storage footprint of the system (and database where applicable) in GB.
Object Stores are now indexed in the top search bar and can be found for easier navigation to the detailed information page.
In September, we released the backup policy selection for Linux and Windows when provisioning. You can now select the policy for several PaaS when provisioning: OpenSearch, Redis, Apache Kafka, RabbitMQ, GlusterFS, Tomcat, Wildfly.
Also, we've improved the UI overview for those products to let you see the backup footprint and the associated policies configured.
The next and last delivery will be for the remaining products: PostgreSQL, MariaDB, SQL Server.
The addition of extra nodes for the Apache Kafka product is now possible on request via a ticket from ITCare. As a reminder, an Apache Kafka cluster consists of a minimum of 3 nodes for production use.
Kubernetes 1.28 is now available for provisioning in ITCare!
A new filtering option is now available in the resource section! The filters adapt dynamically based on the type of resource selected, offering a more personalized experience and optimized display of relevant data.
We are excited to announce that Redis 7.2.5 is now available on our ITCare platform. You can also request an upgrade to version 7.2.5 for an existing Redis PaaS by submitting a ticket.
You can now independently select your backup policy for Linux and Windows virtual instances when creating your resources in ITCare!
This feature is currently in Beta and accessible to everyone.
Oracle Linux distribution is now available in self service using ITCare, our Cloud management platform. This distribution is available as part of our virtual instances product with the same options and properties.
Bot Defense has been updated to introduce the transparent mode, allowing you to view requests deemed illegitimate without impacting traffic. This mode makes it easier to analyze logs and identify false positives, so you can fine-tune by adding legitimate IPs to the whitelist before switching to blocking mode.
OverDrive, based on Nextcloud technology, offers a file storage and sync platform with powerful collaboration capabilities with desktop, mobile and web interfaces. This new product is available in self-service through ITCare in sizing XS and is hosted on-premise with high security standards.
To simplify the load balancer creation, it is now possible to create a load balancer on your Kubernetes cluster directly from the Manage dropdown.
A new healthcheck has been added to all Kubernetes clusters where monitoring is enabled to monitor the Kubernetes API.
On all PaaS, the base operating system has been upgraded to the latest version of the distribution where possible. This ensures that all new deployments will be up to date and benefit from security patches.
A new feature is available for OpenSearch clusters to migrate from a basic topology to a dedicated master topology. This migration will add 3 nodes dedicated to the Master role (not hosting data).
Specialized nodes dedicated to Ingest role can now be added to your OpenSearch cluster. This new feature is available under the Manage dropdown and will add 2 new nodes dedicated to Ingest role.
This feature is only available for PostgreSQL 15 and above.
The OpenSearch PaaS has been updated to support and allow the provisioning of version 2.11.1.
The main menu of ITCare has been improved to provide a better navigation by reducing the page cascading.
The old notification system has been scrapped and replaced with a better one capable of handling very precise notification rules. This lets you customize exactly which notifications you want to receive using subscriptions, and which recipients receive them using email broadcast groups.
Your previous notification subscriptions have been retained.
It is now possible to select the Ingress provider you want for Kubernetes in the creation wizard. Available Ingress providers are: NGINX, Istio and Traefik.
In ITCare, the Kubernetes display has been improved to show the Ingress role on your nodes as well as the version of the operating system deployed on each node.
After improving the way URLs are handled in ITCare, it is now possible to create your own HTTP URL checks directly from ITCare on your load balancer using the URL tab.
Version 12 of the Linux Debian distribution is available for deployment in ITCare. Automation has been improved and provisioning is now quicker.
Security enforcement previously available in Debian 11 is still applied but it is now optional and can be disabled in the wizard.
Resources can now be included or excluded in batch mode from your Service page. Also, we've improved the Patch status information with a dedicated panel in your resource overview to quickly see if your resource is patched and when will the next Patch Party happen.
Custom descriptions can be added to your resources and Services in ITCare to quickly identify your applications.
To help you navigate to your preferred resources, you can now add resources to your personal favorite list that you can summon from the ITCare header.
A new tab is available in the Compute section to browse the networks available for your cloud with their properties.
The maintenance calendar has been improved to display custom events specific to each customer. Private RFCs and events will now appear in the ITCare calendar for your cloud.
The Security section has been improved with a new entry point for Bot Defense & DoS Protection with its details regarding blocked requests and attacks.
This more restrictive profile can be rapidly deployed in the event of an attack on your load balancers. Its fine-tuning is designed to block a greater number of requests.
You can create up to 5 additional resources with the same configuration. The additional resources will be located in the same area but the availability zones can be different.
When displaying a virtual instance, you can now use this existing resource as a template to create a new resource with the same properties: CPU, RAM, network, storage, management options.
Please verify the amount of storage of the disk in the new template. Based on the source template, you might have the wrong amount of storage configured. This is a known bug!
The feature can save time when creating a new virtual instance, but it does not CLONE anything.
When creating a resource, the network selection is now aware of the Service environment. If the Service has:
a Production environment, production networks are displayed by default.
a Non-Production environment, non-production networks are displayed by default.
In both cases, production and non-production networks remain available for selection.
Maintenances can now be scheduled on multiple or all resources at the service level.
New management actions are now available on clusters:
Resize: it can also be done at node level depending on the configuration of the product
Patch party: statuses for each resource are now visible in the service extended view
Upgrade to the next version is possible - skipping versions is not allowed
Downgrade is not possible
URLs related to a Load balancer can now be managed individually:
A dedicated URL tab is available in the Load balancer page
The following actions can be performed per URL or on a group of URLs: Schedule Maintenance, Enable / Disable Monitoring
URL creation is available in the management actions of the Load balancer
You can explore the categorized API specifications and use the Test it feature in order to execute and test the endpoints.
Two kinds of authentication are available: Bearer or OAuth.
When using the Test it option with OAuth2, please:
Select the scopes: openid and email
Enter the clientId: cgdm-itcare-api-academy
Notes:
The Cookies and Headers parameters are optional.
The popup may take several seconds to display after clicking Test it.
To obtain an access token, the client must submit a request to the endpoint . The authorization server requires client authentication to issue an access_token. Here is an example of an access_token request:
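As a sketch, such a request can be built in Python as follows (the credentials are placeholders; the real values and the token endpoint URL are provided with your API account):

```python
import base64
from urllib.parse import urlencode

def build_token_request(client_id: str, client_secret: str):
    """Build headers and body for a client_credentials access_token request."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({"grant_type": "client_credentials"})
    return headers, body

# These headers and body would then be POSTed to the token endpoint.
headers, body = build_token_request("my-api-account", "my-secret")
```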
Always On is now available in ITCare in self service for SQL Server 2022 Enterprise edition. For more information, please head over to the product information.
Also, you can upgrade your existing cluster to version 1.28 in self service. Remember to before launching an upgrade!
An interactive demo is available in the demo section:
For more information, please read the documentation.
For more information, please read the documentation.
An interactive demo is available here:
An interactive demo is available here:
Versions 3.12 and 3.13 are now supported by the RabbitMQ PaaS! For more information, please read the documentation and the official release notes.
Version 16 is now supported in the PostgreSQL PaaS! For more information, please read the documentation and the official release notes.
You can now install PostgreSQL extensions on your PaaS in self-service via ITCare. The list of supported extensions is available in the documentation.
SQL Server 2022 is now available in self-service in ITCare. Both Standard and Enterprise editions can be selected for provisioning, as in previous versions.
A new LTS version of MariaDB is available for deployment in ITCare. Check the official MariaDB release notes for more information.
Version 3.6.0 of Apache Kafka, the open-source platform for distributed event streaming, is now available for deployment via ITCare. This new version uses the Raft consensus algorithm and does away with Zookeeper.
Version 15 of PostgreSQL is available for deployment via ITCare. It is compatible with all the features previously offered.
The self-service restore feature, which lets you restore a backup from one source to another destination, is now available in ITCare.
A new "Strict" mode has been added to the feature.
A single instance can now be converted to a highly available PostgreSQL cluster with two nodes. This operation is not reversible.
It is now possible to upgrade your cluster to the latest supported version from the Manage menu of the cluster:
To use Bearer, check out the section.
It is not possible to create your own ITCare account to access the platform.
To receive an ITCare account, your organization's security representative must submit an account creation request.
Please contact your Service Delivery Management or the commercial team at cegedim.cloud.
ITCare authentication is based on an e-mail address and a password that comply with the standards of the cegedim security policy.
API accounts use the OpenID protocol. More information about the ITCare API can be found at Authentication.
Multi-factor authentication is available and mandatory for certain high privilege actions.
During the on-boarding process, you will be provided with all the information necessary to properly configure the MFA.
ITCare privileges are broken down into roles assigned to profiles.
Profiles are assigned to users.
See resources
See all the resources and their information. Read-only.
Manage maintenances
Ability to manage maintenances
Modify resources
Ability to modify resources except creation and deletion
Manage resources
Complete resource management
MFA must be configured and is mandatory for the following roles:
Manage maintenances
Modify resources
Manage resources
Standard (STD)
Maintenance (DTM)
Operator (OPE)
Power (POW)
This non-exhaustive table describes the basic actions allowed by profile:
Instances
create-instance: POW
start-instance: OPE
stop-instance: OPE
reset-instance: OPE
resize-compute-instance: OPE
delete-instance: POW
Instance monitoring
enable-monitoring-instance: OPE
disable-monitoring-instance: OPE
Snapshot of instances
create-snapshot: MNT
recover-snapshot: MNT
delete-snapshot: MNT
DNS aliases of instances
create-dns: OPE
delete-dns: OPE
LoadBalancers
create-lb: POW
start-lb: OPE
stop-lb: OPE
delete-lb: POW
Monitoring of LoadBalancers
enable-monitoring-lb: OPE
disable-monitoring-lb: OPE
Manage LoadBalancers
add-member-lb: OPE
delete-member-lb: OPE
update-member-state: OPE
DNS alias of LoadBalancers
create-dns-lb: OPE
delete-dns-lb: OPE
Manage maintenance
create-maintenance: MNT
delete-maintenance: MNT
Indicators
create-indicator: POW
update-indicator: POW
delete-indicator: POW
SMS
subscribe-vortext: POW
Storage Object
create-object-stores: POW
update-object-stores: OPE
delete-object-stores: POW
Storage Object - Users
create-user-objectstores: POW
update-user-objectstores: POW
delete-user-objectstores: POW
K8S Clusters
create-cluster: POW
create-cluster-namespace: OPE
delete-cluster-namespace: OPE
create-cluster-nodes: POW
delete-cluster-nodes: POW
The topology of the cegedim.cloud hosting platform is divided into:
Regions: a group of low-latency data centers (< 1 ms)
Availability zones: a set of dedicated infrastructure components in a data center
Here is the list of regions available to our customers:
EB (Paris area): EB4 Boulogne-Billancourt, EB5 Magny-les-Hameaux
ET (Toulouse area): ET1 Labège, ET2 Balma
Availability zones:
EB-HDS-A: Client zone (EB4)
EB-HDS-B: Client zone (EB4)
EB-HDS-C: Client zone (EB5)
EB-A: Area reserved for the cegedim group (EB4)
EB-B: Area reserved for the cegedim group (EB4)
EB-C: Area reserved for the cegedim group (EB5)
ET-HDS-A: Client zone (ET1)
ET-HDS-B: Client zone (ET1)
ET-A: Area reserved for the cegedim group (ET1)
ET-B: Area reserved for the cegedim group (ET1)
A resource is an infrastructure or middleware component deployed in the cegedim.cloud Information System.
It can only belong to one Service (see How are my ITCare resources organized? for the definition of a Service)
A resource is systematically defined by the following properties:
an id: unique identifier of the resource.
a type: the type of the resource e.g. virtual instance, Kubernetes cluster, etc.
a name: more convenient to handle than an id.
a status: defines the state of the resource (active, inactive).
an environment: defines the type of environment of the resource (production, qa, dev, test, etc.).
tags: allows you to tag your resources with customizable keys/values that are queryable.
Here are the possible statuses of a resource, as visible in the web UI or returned by the API:
Active (API value: ACTIVE): the resource is active and the service is available.
Preparation (API value: PREPARATION): the resource is being installed or configured; the service is not yet available.
Inactive (API value: INACTIVE): the resource is inactive and the service is unavailable.
Each cegedim.cloud customer has an Organization that materializes its existence within our IS.
Multiple Clouds can be created within an organization. These allow partitioning of resources and user rights.
You can therefore define, at the level of a Cloud, who has access to what and what actions can be performed.
It is therefore possible, for example, to have a Cloud that gives full power to your development teams so as not to disrupt production. Within a Cloud, resources are then grouped into Services.
The Services allow you to group your resources in a logical way according to several free criteria:
The scope of an application
By environment
Any other free criteria: by customer for example
The Services do not allow the application of user rights restrictions.
In ITCare, the Services have dedicated pages that allow you to easily consult all the resources attached to them.
ITCare API uses conventional HTTP response codes to indicate the success or failure of an API request.
As a general rule:
Codes in the 2xx range indicate success.
Codes in the 4xx range indicate incorrect or incomplete parameters (e.g. a required parameter was omitted, or an operation failed with a 3rd party, etc.).
Codes in the 5xx range indicate an error with ITCare's servers.
This table shows more examples of HTTP response codes:
200: Request successfully processed. Body: varies depending on what was requested.
201: Object successfully created. Body: the created object.
202: Creation order successfully processed; the request will be handled asynchronously. Body: empty, or a tracking object describing the processing of the asynchronous request.
400: Bad request - syntax or consistency error in the query, to be corrected by the issuer. Body: blank, or an indication of the error to be corrected on the client side.
401: Unauthenticated access to the resource. Body: empty.
403: Unauthorized access. Body: empty.
404: Non-existent resource. Body: empty.
409: Conflict. Body: empty.
422: Inconsistent data. Body: empty.
500: Fatal API error. Body: empty.
503: Service temporarily unavailable. Body: empty.
ITCare also outputs an error message and an error code formatted in JSON:
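As an illustration, such an error body could look like the following (the field names here are assumptions, not the documented schema):

```json
{
  "code": 404,
  "message": "Resource not found"
}
```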
How does cegedim.cloud handle server patching?
cegedim.cloud ensures all servers are patched during events called Patch Parties.
These events happen every quarter, i.e. 4 times per year.
During a Patch Party, patches are installed on all servers that are not excluded from the event, followed by a reboot.
Services will be interrupted during a Patch Party!
Two Patch Parties are scheduled every quarter:
QA Patch Party: happens first during business hours on Thursdays and is only applicable to non-production environments.
Production Patch Party: happens 3 weeks after the QA Patch Party on Sundays and is only applicable to production environments.
On every resource details page, a panel named Patch status lets you review:
The last time the resource was patched successfully
Timestamp of the upgrade
Patch tag of the upgrade with this format: YYYY-QQ (e.g. 2022-Q4)
The next Patch Party scheduled if the resource is not excluded
The person who excluded the resource and when if the resource is excluded
On every resource details page, a Patch Party button lets you include or exclude the resource from all future Patch Parties.
Excluding a resource requires a reason explaining why. Including lets you select the Patch Group, which determines when your resource is effectively patched during the patching day.
In the same Patch Party button as described above, you can change the desired Patch Group at any moment.
3 Patch Groups are available to split your resources. Each group will be handled at a different time.
This is useful if you don't want multiple resources to be patched (and interrupted) at the same time, thus improving your application's resiliency.
How does cegedim.cloud support its managed products?
cegedim.cloud's managed services include support. This support is divided into 4 phases:
Standard phase: cegedim.cloud offers full product support.
End-of-sale phase: secondary phase indicating that a newer version has been promoted to the Standard support phase. We strongly advise you to migrate to a more recent version.
Extended support phase: third phase, starting at the end-of-life (EOL) date announced by the publisher for the product. Many services are no longer guaranteed and are switched to best effort.
End of Support phase: terminal phase activated when cegedim.cloud is no longer able to provide support. Charges may apply if the system is considered a security risk (breach, data compromise, need for isolation).
Here is a detailed listing of features by support phase:
Technical incident supervision & support
Standard requests
Guaranteed restoration time
24x7 Support
Data backup, restoration and Geo-replication
Disaster recovery
Managed cybersecurity
Quarterly security patches and minor updates
Critical security patches and updates
Deployment via ITCare
From our cloud platform manager ITCare, you can create a support request ticket, either directly from the homepage or from the Support section in the left-hand side menu. Click on the Make a request button.
First, search for the form corresponding to the product you need assistance with by typing the product name in the search bar. Select the form and provide the required information.
A support ticket will be created upon submission.
Managed resources are monitored by cegedim.cloud if Monitoring has been enabled. However, an incident form is available in our cloud platform manager ITCare if you need to let us know about an issue that we might have missed.
From the homepage or the Support section of the left-hand side menu, click on the Report an incident button.
To better process your incident, a severity level is required:
No impact
Degradation
Service disruption
Search for the form matching the product you have an issue with, then provide the required information.
An incident ticket will be created, and our support team will contact you as soon as possible.
Several Cloud Native products are available in the cegedim.cloud catalog.
They are listed by category in the sections below, for a quick overview and the ability to navigate directly to the associated public documentation.
The vast majority of these products are available in self-service via ITCare, our cloud platform management tool.
For more information on our commercial offers, please visit our official website.
They are highly customizable and resizable. They offer the greatest flexibility when a product is not available in platform-as-a-service (PaaS) mode.
The following operating systems and distributions are supported by cegedim.cloud:
Linux
Debian
Ubuntu
CentOS
Red Hat Linux Enterprise (RHEL)
Windows Server
AIX
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It is provided as a managed service by cegedim.cloud and benefits from the infrastructure, operations, and support we provide.
This includes features like scaling, monitoring, security patching, and simplified deployment, making it easier for developers and teams to focus on application development and avoid the complexities of managing the underlying infrastructure.
Depending on the number of pages tracked per month, several components will be deployed and managed by cegedim.cloud, allowing you to focus on using and configuring Matomo.
This service is not available on a self-service basis and requires contact with the cegedim.cloud sales team.
It can be individually activated on your load balancers directly from our cloud management platform, ITCare.
This service is not available on a self-service basis and requires contact with the cegedim.cloud sales team.
This service is not available on a self-service basis and requires you to contact the cegedim.cloud sales team.
This service can be activated on a self-service basis via our ITCare cloud management platform.
With its ability to scale and handle large amounts of data, GlusterFS provides benefits such as improved storage capacity, fault tolerance, and high availability for various applications and workloads.
GlusterFS is available for self-service consumption via our ITCare cloud management platform.
The benefits of an S3 object storage solution include high availability, low-cost storage, flexible access controls, automatic data redundancy, and the flexibility to choose different providers and avoid vendor lock-in.
Matomo (formerly called Piwik) is an open-source web analytics software that provides usage statistics for a web page, such as visits, page views, origin of visits and much more.
Matomo offers the possibility of a complete analysis of certain aspects of your websites, with information regarding your visitors, their behavior, their patterns and more.
Matomo is an excellent replacement for Google Analytics for the following reasons:
100% Data Ownership
Privacy Protection
No Data Sampling
GDPR Compliance (Recommended by the CNIL)
Flexibility (data portability, raw access, open source, etc.)
Matomo is deployed on-premise in cegedim.cloud's data centers and is provided as a service.
cegedim.cloud guarantees the following level of managed service: deployment of instances, maintenance in operational condition, flexibility, security and monitoring are ensured by our experts.
Billing occurs monthly and varies based on the sizing selected.
Please contact your Service Delivery Manager for more information regarding pricing.
With the exception of AIX, all managed operating systems are available for self-service consumption using our cloud management platform, ITCare.
are dedicated Kubernetes clusters available for self-service consumption in our ITCare cloud management platform.
is available for self-service consumption via our ITCare cloud management platform.
is available for self-service consumption via our ITCare cloud management platform.
is available for self-service consumption via our ITCare cloud management platform.
is available for self-service consumption via our ITCare cloud management platform.
is available for self-service consumption via our ITCare cloud management platform.
is available for self-service consumption via our ITCare cloud management platform.
is available for self-service consumption via our ITCare cloud management platform.
is available for self-service consumption via our ITCare cloud management platform.
The solution is a vulnerability analysis and detection service for your resources exposed to the Internet.
is a protection feature against malicious bots and distributed denial-of-service attacks.
The service is tailored to assess the effectiveness of e-mail security awareness.
The solution transforms sensitive data contained in databases into less sensitive data, according to your own specifications and needs.
The based advanced monitoring solution gives you access to detailed dashboards on your resources.
is an open-source distributed file system that allows for scalable and high-performance storage across multiple servers. It aggregates storage resources from multiple machines into a single storage pool, offering a single namespace for easy management and access.
cegedim.cloud's S3-compatible (Simple Storage Service) solution is a cloud-based storage service. It provides scalable, secure, and durable storage for various types of data, including files, images, videos, and backups.
For more information, please visit .
To request the addition of a MariaDB read-only replica, you need to create a request ticket via ITCare.
In the Support section of the left-hand side menu, click on the Make a request button. Select the Database category, then MariaDB, and fill in the form: Setting up passive MariaDB replication.
A support ticket will be created upon submission.
Instances
1
3
CPU (per instance)
2 - 16 vCPU
2 - 16 vCPU
RAM (per instance)
4 - 384 GB
4 - 384 GB
Storage (per instance)
10 - 2048 GB
10 - 2048 GB
Supported Version(s)
10.11
10.6
10.11
10.6
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.8%
99.9%
Multi-AZ deployment
Instances
3 - 5+
CPU (per instance)
2 - 16 vCPU
RAM (per instance)
4 - 384 GB
Storage (per instance)
100 - 8000 GB
Supported Version(s)
2.*
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.9%
Multi-AZ deployment
Instance
1
2
CPU (per instance)
2 - 16 vCPU
2 - 16 vCPU
RAM (per instance)
4 - 384 GB
4 - 384 GB
Storage (per instance)
10 - 2048 GB
10 - 2048 GB
Supported versions
10 to 16
12 to 16
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.8%
99.9%
Multi-AZ deployment
Instance(s)
1
3
CPU (per instance)
2 - 16 vCPU
2 - 16 vCPU
RAM (per instance)
4 - 384 GB
4 - 384 GB
Storage (per instance)
10 - 2048 GB
10 - 2048 GB
Supported version(s)
7.2
6.2
7.2
6.2
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
TLS/SSL
Availability
99.8%
99.9%
Multi-AZ deployment
Instance(s)
1
3
CPU (per instance)
2 - 16 vCPU
2 - 16 vCPU
RAM (per instance)
4 - 384 GB
4 - 384 GB
Storage (per instance)
10 - 4096 GB
10 - 4096 GB
Supported version(s)
2022
2019
2017
2016
2022
2019
2017
2016
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.8%
99.9%
Multi-AZ deployment
Brokers
3+
Controllers
3
CPU (per broker)
2 - 16 vCPU
RAM (per broker)
4 - 384 GB
Storage
40 - 1024 GB
Supported version(s)
3.6.0
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.9%
Multi-AZ deployment
Instances
1
3
CPU (per instance)
2 - 16 vCPU
2 - 16 vCPU
RAM (per instance)
4 - 384 GB
4 - 384 GB
Storage (per instance)
10 - 2048 GB
10 - 2048 GB
Supported version(s)
3.13
3.13
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.8%
99.9%
Multi-AZ deployment
Instance(s)
2
CPU (per instance)
2 vCPU
RAM (per instance)
4 GB
Supported version(s)
10.2
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.9%
Multi-AZ deployment
S3 compatibility
Geo-replication
Quota
Object Lock
File lifecycle
Presigned URLs
Sizings
XS
S
M
L
XL
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.8%
Region selection
Self-service
The following Linux distributions can be hardened during provisioning:
Debian starting version 11
Ubuntu starting version 22.04
Oracle Linux starting version 9
Some weak filesystems are disabled in the kernel
Separate mount points for very active filesystems: /var/log, /var/log/audit, /var/tmp
Protection of /var/log, /tmp and /var/tmp
Disabling removable storage
Ensure root password is required to boot in rescue mode
Tracing of every usage of sudo command
Several parameters are activated in kernel to protect running processes
Unnecessary or weak network services are disabled (enforced by configuration manager)
Ensure time service is configured and active
IPV6 is disabled
Several kernel parameters are set to protect network
Disable uncommon network protocols
Centralization of system logs
Ensure that every event is logged
Ensure cron service is active and configured
Ensure cron directories are protected
Ensure ssh is active and configured
Force ssh secure protocols and parameters
Ensure idle sessions deactivation
Ensure strong password rules are applied
Ensure sensitive authentication files are protected
Sharing responsibilities
To have a common understanding about the responsibilities and duties between cegedim.cloud and the customer, we use a RACI matrix.
R
Responsible
Assigned to complete the task or deliverable
A
Accountable
Has final decision-making authority and accountability for completion (only 1 per task)
C
Consulted
An adviser, stakeholder, or subject matter expert who is consulted before a decision or action
I
Informed
Must be informed after a decision or action
Below is the RACI matrix describing actions related to managed products from cegedim.cloud's catalog.
There are slight differences according to the plan subscribed to by the customer:
Self Service
The customer can create resources directly through ITCare, using self-service and pay-per-usage.
On Request
Resources are provisioned and delivered by cegedim.cloud on request by the customer.
Create, Stop, Start, Delete or Resize an instance or a cluster
Self Service
I
A / R
The decision to provision / stop / start / delete a deployment and its associated parameters is made by the customer.
The actions are performed:
by customers through ITCare if they have subscribed to the "On Demand" service
for other customers, by cegedim.cloud's Professional Services team
Use an instance or a cluster
*
I
A / R
Customer is responsible for the healthy usage of the product.
Modify configurations
On request
A / R
I
Certain configuration parameters can be modified at the customer's request.
Standard Monitoring
*
A / R
I
Monitoring is mandatory, and accessible to customer through ITCare.
Performance metrics
*
R
I
Performance metrics are provided by default and reachable through ITCare.
Backup and Restoration
*
R
A / I
Backup policy is defined by the customer and applied by cegedim.cloud, which is responsible for ensuring that backups are done and for restoring data when requested.
Customer has information about the backup in ITCare.
Disaster Recovery Protection
*
R
A / I
Disaster Recovery is activated by the customer and applied by cegedim.cloud, which is responsible for ensuring that the associated RTO and RPO are reached.
Customer has information about the Disaster Recovery Protection in ITCare.
Security Patches
*
R
A / I
cegedim.cloud applies security patches to the execution environment quarterly, during "Patch Parties", by default.
Version Upgrades
On Request or Self Service
R
A / I / R
The upgrade can be done autonomously by the customer from ITCare when possible, OR a request can be issued by the customer; if the transition is possible, cegedim.cloud will upgrade or update the product version.
Some of our products have specific actions that can be carried out autonomously and in self-service from our ITCare cloud management tool. The matrices below are therefore complementary to the generic RACI matrix.
Add a Kubernetes node
Self-service
I
A / R
Customer can add Kubernetes nodes in self-service using ITCare.
Resize Kubernetes nodes
Self-service
I
A / R
Customer can resize Kubernetes nodes in self-service using ITCare.
Remove a Kubernetes node
Self-service
I
A / R
Customer can remove a Kubernetes node in self-service using ITCare.
Enable HA mode
Self-service
I
A / R
Customer can enable High Availability on a Kubernetes cluster in self-service using ITCare.
Add a MariaDB read-only Replica
On Request
A / R
I
On request, a read only MariaDB replica can be configured for a standalone MariaDB node.
Index management
*
I
A / R
Customer is responsible for creating and managing their indices. cegedim.cloud does not have access to them, except for the security_audit index.
Restore source PostgreSQL on a destination (seed)
Self-service
I
A / R
The decision to restore a PostgreSQL farm to another PostgreSQL farm is made by the client. The actions are carried out:
through ITCare if they have subscribed to the "On Demand" service.
by the Professional Services team at cegedim.cloud
Convert to High availability
Self-service
I
A / R
The decision to convert a PostgreSQL farm to High Availability is made by the client. The actions are carried out:
through ITCare if they have subscribed to the "On Demand" service.
by the Professional Services team at cegedim.cloud
Manage Apache Kafka objects
*
I
A / R
Customer is responsible for managing Apache Kafka objects (topics, partitions, etc.) and for their healthy usage.
Manage RabbitMQ objects
*
I
A / R
Customer is responsible for managing RabbitMQ objects (exchanges, queues, etc.) and for their healthy usage.
Enable / Disable Bot Defense option on a Load Balancer
Self-service
I
A / R
The decision to enable / disable the Bot Defense option is made by the customer.
Add or delete Whitelisted IP
Self-service
I
A / R
Customer can add or delete whitelisted IP.
Access to DDOS and blocked requests from Bot Defense and Dos Protection
Self-service
I
A / R
Blocked requests are reported in real time (including the blocked IP, the blocking reason and the support ID).
Request details on blocked request
On Request
A / R
I
Upon request by the customer, more information can be provided about a blocked request by providing the support ID.
Designate a champion and define data masking objectives
*
I
A / R
Define the context of the masking
*
I
A / R
Identify sensitive data to be masked (specifications)
*
I
A / R
Identify data integrity constraints within the database
*
I
A / R
PDM: discovery and tagging of sensitive data
*
A / R
I / C
PDM: Masking rules and masking policy definition
*
A / R
I / C
PDM: Optional: custom rules and dictionaries implementation
*
A / R
I / C
PDM: Masking plan creation and execution*
10 anonymization treatments included
12 months subscription
Options: Package of 10 additional anonymization treatments to use within the subscription period
*
A / R
I / C
Results verification and masking effectiveness validation
*
I
A / R
*Each execution includes: prerequisite check, script execution, monitoring of the execution by an IT security expert in direct contact with the customer
Manage storage volumes
Self-service
I
A / R
Customer is responsible for the management (creation, deletion, resizing) of the storage volumes for his cluster.
Create an Object Store
Self-service
I
A / R
The decision to provision / delete / modify an Object Store and its associated parameters is made by the customer.
The actions are performed:
by customers through ITCare if they have subscribed to the "On Demand" service
for other customers, by cegedim.cloud's Professional Services team
Manage Object Store Quota
Self-service
I
A / R
Delete an Object Store
Self-service
I
A / R
Create an Object User
Self-service
I
A / R
The decision to create an Object User and its associated parameters is made by the customer.
The actions are performed:
by customers through ITCare if they have subscribed to the "On Demand" service
for other customers, by cegedim.cloud's Professional Services team
Manage Object Users
Self-service
I
A / R
The decision to modify an Object User and its associated parameters is made by the customer.
These actions include Secret Key renewal or Object User locking.
The actions are performed:
by customers through ITCare if they have subscribed to the "On Demand" service
for other customers, by cegedim.cloud's Professional Services team
Delete Object Users
Self-service
I
A / R
The decision to delete an Object User and its associated parameters is made by the customer.
The actions are performed:
by customers through ITCare if they have subscribed to the "On Demand" service
for other customers, by cegedim.cloud's Professional Services team
Create Bucket
Self-service
I
A / R
Bucket creation and its associated parameters are handled by the customer.
The actions are performed using the S3 API.
Delete Bucket
Self-service
I
A / R
Bucket deletion and its associated parameters are handled by the customer.
The actions are performed using the S3 API.
Manage Bucket Policy
Self-service
I
A / R
Bucket Policy management is done by the customer.
The actions are performed using the S3 API.
Manage Lifecycle Configuration
Self-service
I
A / R
Lifecycle Configuration management is done by the customer.
The actions are performed using the S3 API.
Manage Object Configuration
Self-service
I
A / R
Object Lock configuration on Bucket or object is done by the customer.
The actions are performed using the S3 API.
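The self-service bucket operations above all go through the standard S3 API. As an illustrative sketch with the AWS CLI (the endpoint URL, bucket name and lifecycle file are placeholders to adapt to the credentials delivered with your Object Store):

```shell
ENDPOINT="https://s3.example.cegedim.cloud"   # placeholder endpoint URL
BUCKET="my-bucket"                            # placeholder bucket name

# Create a bucket
aws s3api create-bucket --bucket "$BUCKET" --endpoint-url "$ENDPOINT"

# Apply a lifecycle configuration from a local JSON file
aws s3api put-bucket-lifecycle-configuration --bucket "$BUCKET" \
  --lifecycle-configuration file://lifecycle.json --endpoint-url "$ENDPOINT"

# Delete the bucket (it must be empty)
aws s3api delete-bucket --bucket "$BUCKET" --endpoint-url "$ENDPOINT"
```

Any S3-compatible client (boto3, s3cmd, rclone, etc.) can be used the same way by pointing it at the Object Store endpoint.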
Availability and Monitoring
*
R / A
I
cegedim.cloud will ensure the Object Storage Service is globally available and healthy at all times.
Multi Region Replication
*
R / A
I
Data replication between regions is done by cegedim.cloud, which is responsible for ensuring that the associated RTO and RPO are reached.
Customer has information about the Disaster Recovery Protection in ITCare.
Security Patches
*
R / A
I
cegedim.cloud applies security patches. This is transparent for customers and does not lead to a service interruption.
Version Upgrades
*
R / A
I
cegedim.cloud applies upgrade patches. This is transparent for customers and does not lead to a service interruption.
The S3 API may change.
Virtual Instance is a fully managed product offered by cegedim.cloud, designed to simplify and enhance your hosting experience. With Virtual Instances, you no longer need to worry about the complexities of managing your hosting infrastructure – our expert team takes care of it all.
Our product supports a variety of operating systems such as Linux, Windows, and AIX, giving you the freedom to choose the environment that best suits your needs. It can be effortlessly deployed through our user-friendly cloud platform management tool called ITCare.
This empowers you with the flexibility to spin up your desired instance, select your preferred operating system and customize resources to meet your specific needs with a few clicks, ensuring that your instances are tailored to match your business requirements precisely.
We understand the importance of performance and reliability, which is why Virtual Instances come equipped with tailored monitoring systems, backup services and data replication. This ensures real-time visibility into the health of your instances and added security.
In summary, whether you need a Linux, Windows, or AIX operating system, and regardless of your resource requirements, Virtual Instances provide the flexibility and scalability you need.
Resource configuration can vary based on the target operating system.
Billing is processed monthly and based on the number of instances and additional costs for storage, backup, 24x7 monitoring.
Cost estimation for a Virtual Instance is available via your Service Delivery Manager.
cegedim.cloud provides managed Kubernetes clusters with the highest level of security and resilience built in.
By using those clusters, you can deploy your standard Kubernetes workloads across cegedim.cloud Availability Zones and data centers to maximize your applications' availability.
cegedim.cloud also provides a console, powered by Rancher, where you can manage your workloads and configure built-in Observability capabilities (Logging and Metrology) to connect to your own platform (Grafana, ElasticSearch, etc.).
The main objectives of the cegedim.cloud Container as a Service product are:
Ability to provide latest generation Kubernetes clusters on demand
Support for stateful applications
Support for persistent volumes with Auto-Provisioning and High Availability
Compliance with network, storage and security standards and rules
Strong security
Monitoring and metrics system built-in on demand for each application
Support for dynamic network rules
Billing is processed monthly and based on the number of nodes plus any supplementary costs for storage and backup.
Cost estimation for a Kubernetes cluster is available via your Service Delivery Manager.
Kubernetes has built-in features & mechanisms to keep nodes and workloads healthy:
kube-scheduler decides on which nodes to place pods based on the pods' requested resources and the nodes' unreserved resources.
kubelet Out-Of-Memory kills pods that consume more resources than the limits defined in their spec (OOM killed).
If, for any reason, a node runs out of resources, kubelet evicts pods to relieve the pressure on the node (pod eviction). The pod eviction decision is based on the pods' QoS class.
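The requests and limits these mechanisms rely on are declared in each pod spec. A minimal illustrative sketch (the name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                            # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      resources:
        requests:        # used by kube-scheduler for placement decisions
          cpu: "250m"
          memory: "256Mi"
        limits:          # exceeding the memory limit gets the container OOM killed
          cpu: "1"
          memory: "512Mi"
```

Declaring both requests and limits gives the pod the Burstable QoS class (Guaranteed if they are equal), which makes it less likely to be evicted under node pressure.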
Keep in mind that cegedim.cloud provides standard Kubernetes clusters with these features and has qualified the official Kubernetes documentation below:
The problem, in real-life applications, is that:
not all technologies are natively container friendly
resource usage metrics collected by kubelet (or node exporter, etc.) are not real time
resource usage metrics are not taken into account by kube-scheduler
kubelet, as a Linux process, is not always the most prioritized process, especially when nodes run out of CPU.
When kubelet fails to handle resource pressure on a node, the node fails and all related workloads are redeployed. In the worst case, a domino effect of node failures can happen.
cegedim.cloud provides a hardening solution called cgdm-hardening:
One hardening-slave pod per worker node: writes CPU & RAM consumption to a centralized database
One hardening-master pod deployed on master nodes: reads metrics from the database and takes action in case of crisis
Hardening stack has a very low resource footprint
Hardening-master pod can take action in two modes:
Preventive mode (as a kube-scheduler assistant, default mode): puts the taint cegedim.io/overload=true:NoSchedule to avoid placing more pods on under-pressure nodes (85% RAM or 90% CPU). When CPU is below 85% and RAM is below 80%, the taint is removed.
Protective mode (as a kube-controller assistant): when RAM consumption reaches 95%, kills the newest pods, one after another, to relieve the pressure. It is not activated by default.
You should never use a wildcard toleration on applications, otherwise the preventive effect of this solution is defeated.
Limitation: node failures due to extremely high CPU peaks during very short periods of time cannot be mitigated with this solution.
New Kubernetes clusters will be provisioned with the preventive hardening activated.
If workloads deployed by customers create a lot of node failure (TLS_K8_NODES), the protective mode will be activated.
Customer can disable this hardening by creating an ITCare request ticket. This means the customer will have to reboot the nodes themselves in case of crisis.
Customer can re-enable this hardening by creating an ITCare request ticket any time.
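To illustrate the wildcard-toleration pitfall mentioned above, here is a sketch of a pod-spec fragment (the scoped taint key and value are hypothetical examples, not part of the platform):

```yaml
# The preventive mode taints under-pressure nodes with:
#   cegedim.io/overload=true:NoSchedule
#
# Do NOT use a wildcard toleration such as:
#   tolerations:
#     - operator: "Exists"      # tolerates every taint, defeating the preventive mode
#
# If a pod must tolerate a taint, scope it explicitly to that taint:
tolerations:
  - key: "dedicated"            # hypothetical taint key of your own
    operator: "Equal"
    value: "gpu"                # hypothetical value
    effect: "NoSchedule"
```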
There are 3 possible topologies of K8s cluster provided by cegedim.cloud:
Standalone: workloads are deployed in a single data center with no disaster recovery plan.
Standard: workloads are still deployed in a single data center, but protected against data center disaster using a secondary data center as failover.
High Availability: workloads are deployed across two data centers; with a well-distributed multi-replica deployment, services are not interrupted in case of a data center disaster.
cegedim.cloud provides a compute topology based on:
Region: a pair of data centers
Area: infrastructure network isolation between tenants
Availability Zones: inside an area, isolated infrastructure for Compute and Storage
Kubernetes clusters can be deployed using 2 topologies:
Based on your requirements in terms of RTO and costs, you can choose the best topology for your needs.
cegedim.cloud uses standard topology keys:
Since Kubernetes > 1.20, failure-domain.beta.kubernetes.io/zone is deprecated but remains available if pre-existing.
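For example, a Deployment can spread its replicas across Availability Zones using the standard topology key; a minimal sketch (the app name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                 # hypothetical name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # standard topology key
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: demo-app
      containers:
        - name: app
          image: registry.example.com/app:1.0        # placeholder image
```

With this constraint, the scheduler keeps the replica count per zone within a skew of 1, so a data center disaster leaves part of the replicas running.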
Here is the list of the components and tools that are deployed in a standard delivered cluster:
Here is a figure with all network components explained:
Two pods in 2 namespaces that belong to the same Rancher Project can fully communicate with each other.
Two pods in 2 namespaces that belong to two different Rancher Projects cannot communicate unless the user defines a dedicated Network Policy.
Pods from the Rancher Project named System can communicate with pods from other Rancher Projects.
Pods can only send requests to servers in the same VLAN, unless a specific network opening rule is configured between the two VLANs.
Pods cannot send requests to the Internet unless a proxy is set up inside the pod or a specific network opening rule is configured for the related VLAN.
Requests toward kube api-server can be reverse-proxied by Rancher URL.
Workloads hosted by pods cannot be accessed directly from outside the K8s cluster, but only via the ingress layer for the HTTP protocol or via a NodePort service for the TCP protocol, each with a respective Load Balancer.
nginx is the ingress controller deployed to expose your workloads. You can find the relevant documentation on the official GitHub.
Two ingress controllers are deployed:
One exposed to the internal Cegedim network:
nginx ingress controller
listening on every worker node on port 80
this is the default ingress class (no ingress class needs to be specified)
One exposed to the Internet:
nginx ingress controller - you can request to have an external nginx ingress controller
listening on every worker node on port 8081
its ingress class is: nginx-ext
using the annotation: kubernetes.io/ingress.class: "nginx-ext"
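A minimal Ingress manifest targeting the Internet-facing controller might look like this (the resource, host and service names are placeholders; only the annotation value comes from the list above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                             # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx-ext"     # Internet-facing controller
spec:
  rules:
    - host: app.mycluster.ccs.cegedim.cloud      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service               # hypothetical backend service
                port:
                  number: 80
```

Omitting the annotation routes the Ingress through the default, internal-facing controller instead.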
A K8s cluster comes with:
An Elastic Secured Endpoint, managed by F5 appliances, exposing the K8s workload to the cegedim internal network (once you're connected to Cegedim LAN, either physically or through VPN)
A *.<yourclustername>.ccs.cegedim.cloud DNS resolution to this endpoint
A *.<yourclustername>.ccs.cegedim.cloud SSL certificate configured
You can use ITCare if you need a specific configuration:
Exposing your workloads to the Internet or private link
Using a specific FQDN to deploy your workload
Using a specific certificate to deploy your workload
Using Traefik as Ingress Provider instead of nginx
Adding other Ingress Providers
Accessing resources outside of cluster
cegedim.cloud now provides a multi-tenant Ceph Storage Platform as a CSI provider with the following specifications:
Data is replicated 4 times and evenly distributed (using the Ceph CRUSH map) across 2 data centers to ensure that, under disaster scenarios, 2 replicas of the data are always available.
Each Kubernetes cluster, as a Ceph client, has its own pool of data on Ceph server and consumes services with its own pool scoped credential.
Only CSI Ceph RBD is provided for the moment.
Further information on Ceph CSI can be found here:
cegedim.cloud uses External Snapshotter to snapshot & restore PVCs of your Kubernetes clusters.
All information about this application can be found here:
As a best practice, we recommend naming the snapshot class after the storage class. Simply execute the command below to check:
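Assuming kubectl access to the cluster, a check might look like this (a sketch; the class names returned will vary per cluster):

```shell
# Compare snapshot class names against storage class names
kubectl get volumesnapshotclass
kubectl get storageclass
```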
To list all CSI available in a Kubernetes cluster, perform the following:
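One way to do this, assuming kubectl access, is to list the registered CSI driver objects:

```shell
# List the CSI drivers registered in the cluster
kubectl get csidrivers.storage.k8s.io
```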
Here is a mapping between Storage Class and CSI:
Connect to the ITCare platform and click on the Analytics button in the main menu on the left.
Click on "Create a Matomo Instance" and follow the instructions.
Provide the global service in which you want to create a Matomo instance.
Give the name of the instance you want to create.
A default website name and URL can optionally be configured during provisioning. If left empty, dummy values will be used. Click Next.
Pick the sizing of the Matomo Analytics instance matching your needs and click Next:
Provide the password for the super user named "administrator" that will be created for you.
The super user's password is not saved by cegedim.cloud.
Make sure to save your password!
At the next step, you can configure management options:
Monitoring (highly recommended)
24/7 monitoring
Backup (highly recommended)
Storage replication
Click Next.
Select the region in which you want to create your Matomo instance. Click Next.
Verify your inputs on the synthesis page. You can:
Check the instance that will be created
See the target global service
Save your administrator password.
Verify the management options.
Click Submit when ready.
Once the instance is ready, you will be notified by email with the information required to connect to it. Instance creation can take up to 2 hours based on the current load on automation.
The instance will then be displayed in the management page in the Analytics section.
Go to the "Analytics" menu from the left main menu and click on the manage link. Once presented with all the Matomo instances, click on the Start button of the instance of your choice.
Starting a Matomo instance will start all components.
An email notification will be sent when the service is activated.
Go to the "Analytics" menu from the left main menu and click on the manage link. Once presented with all the Matomo instances, click on the stop button of the instance of your choice.
Input an RFC number for tracking (optional) then submit by clicking on Stop.
Stopping a Matomo instance will stop all associated components and monitoring will be disabled.
An email notification will be sent when the instance is stopped.
Go to the "Analytics" menu from the left main menu and click on the manage link. Once presented with all the Matomo instances, click on the delete button of the instance of your choice.
This action will delete all components used by that Matomo instance.
Please note that this action is not recoverable.
Input an RFC number for tracking (optional) and the instance name (mandatory) to confirm your choice then click Delete.
An email notification will be sent when the instance is removed.
Go to the "Analytics" menu from the left main menu and click on the manage link. Once presented with all the Matomo instances, click on the resize button of the instance of your choice.
Select the new size, which can only be larger than the current one, and click on Resize.
Service will be interrupted.
An email notification will be sent when the instance is resized.
Connect to your Matomo Analytics instance with the "administrator" user you provided during provisioning.
Click on the Administration icon:
Select Extensions menu
Go to the end of the page and click on the "Install new components" button:
Look for, install and activate modules at your own discretion:
Connect to your Matomo Analytics instance with the "administrator" user.
Click on the administration icon
Click at the top of the page on the message indicating a new version
Confirm the automatic upgrade
Wait until the upgrade completes with a success message
The following Linux distributions are available when selecting Linux as an operating system for your Virtual Instance:
CentOS
Debian
Ubuntu
Red Hat Enterprise Linux (RHEL)
Oracle Linux
Hardening is applied on some recent Linux distributions such as Debian 11, Debian 12, Ubuntu 22 and Oracle Linux 9. Here are the different parts of the system concerned by the hardening:
Enforcement of network security parameters
Protection of sensitive file systems
Restriction of SSH connections to strong protocols and enforced encryption schemes
Disabling of dynamic kernel module loading
Windows Server is available as an operating system for your Virtual Instance. cegedim.cloud supports multiple versions from Windows Server 2022 to Windows Server 2012R2.
cegedim.cloud supports the IBM AIX operating system on IBM Power Systems. The currently supported version is major version 7 and its associated Technology Level versions.
Virtual Instances are available in the following cegedim.cloud data centers:
EB4 - Boulogne-Billancourt, France
EB5 - Magny-les-Hameaux, France
ET1 - Labège, France
ET2 - Balma, France
A Virtual Instance can be configured and customized to your needs regarding:
Compute: number of vCPUs
RAM: quantity of memory allocated (will vary based on the number of vCPUs)
Storage: additional disks and storage in GB to allocate to the virtual instance
This section lists which features and capabilities are available to customers, and how to request or perform them:
Authentication to the virtual instance is Active Directory based for Linux and Windows (not AIX). The requesting user will automatically be added as an administrator of the Virtual Instance. This user is then free to configure and add more users with the desired privileges.
Backup is an option that can be enabled for your Virtual Instance. In a production environment, the backup option will always be toggled on, by default, in ITCare.
You can disable the backup option at your own risk.
Backups are taken every day and saved in the local data center, then replicated to a second data center on the campus. Backup retention for virtual instances is 28 days by default, but can be customized with your Service Delivery Manager to suit your needs.
The date of the last backup and the backup storage footprint can be seen directly in ITCare in the resource details page of your Virtual Instance.
As managed resources, virtual instances come with monitoring if the option has been toggled on. In a production environment, the monitoring toggle is always activated by default.
You can disable the monitoring option at your own risk.
By activating monitoring, multiple health checks will be deployed to ensure your virtual instance is running and stays healthy. If any of those checks are triggered, our support team is notified with a ticket to solve the issue within the agreed service level.
These monitoring alerts can be consulted directly in ITCare and metrics of performance are also provided for key indicators like CPU, memory, disk and network consumption.
When the monitoring is enabled, it is only effective during Business Hours. To extend the monitoring outside Business Hours, the 24x7 option can be enabled and additional fees will apply.
This ensures that your Virtual Instance is monitored at all times by our Support team, and actions will be taken to escalate and solve any issue.
Data replication is a feature which enables Disaster Recovery protection. When the feature is enabled, the data of your Virtual Instance is replicated from the local storage array to an offsite storage array.
This means that in the event of losing the local data center, your data is still safe in another data center and the procedure to restore and revive your Virtual Instance can be activated.
You can disable the replication option at your own risk.
Matomo is available in self-service in ITCare for all authorized users.
Based on the selected sizing (see Sizings below), multiple components will be deployed, with a bare minimum of:
1 front-end web server
1 back-end database server
1 load balancer with a public facing IP (certificate included)
Higher sizings (to track more pages viewed per month) can include more web servers for load balancing. For sizings higher than XL, a request ticket is required.
Once provisioned, your Matomo instance is publicly accessible through the URL provided in ITCare.
Plugin installation and Matomo upgrades can be done autonomously by the customer.
Matomo is available in the following regions:
EMEA - France - Boulogne-Billancourt
EMEA - France - Toulouse
The following sizings are available:
Resize is available in ITCare self-service but only allows resizing UP.
Scaling down is not currently possible.
The version of Matomo deployed depends on the last patch party.
The latest version will usually be available, but in some cases the deployed version may lag slightly behind.
In this case, the update can either be triggered autonomously by the customer directly from the web UI, or on request by creating a request ticket in ITCare.
This section lists which features and capabilities are available to customers, and how to request or perform them:
Customer is provided with a super user local account.
OpenID Connect (OIDC) is configured to easily add and grant other users access to Matomo.
Access to your Matomo instance is done securely through HTTPS.
All data is stored in cegedim.cloud data centers on encrypted storage arrays.
The password of the super user account provided to the customer is not stored nor saved by cegedim.cloud.
Matomo and associated components are monitored by our support team.
A global health status is displayed in ITCare for your convenience.
Managed OpenSearch
OpenSearch is a community-driven, open source search and analytics suite derived from Apache 2.0 licensed Elasticsearch 7.10.2 & Kibana 7.10.2.
It consists of a search engine daemon, OpenSearch, and a visualization and user interface, OpenSearch Dashboards. It enables people to easily ingest, secure, search, aggregate, view, and analyze data.
These capabilities are popular for use cases such as application search, log analytics, and more. OpenSearch will continue to provide a secure, high-quality search and analytics suite with a rich roadmap of new and innovative functionality.
OpenSearch Cluster servers and service configuration are managed by cegedim.cloud. The product is available in ITCare in self-service.
Users have full access to the OpenSearch database and Dashboards. It is the users' responsibility to manage the security of indexes and their lifecycle.
OpenSearch is deployed as a cluster on-premise in our data centers.
The same level of service as the Compute offer is guaranteed: deployment of instances, maintenance in operational condition, flexibility, security and monitoring are thus ensured by our experts.
Sizing can be configured according to your needs.
The minimum cluster size is 3 nodes, but this is not recommended for production. At least 5 nodes are advised for production use.
Billing is processed monthly and based on the number of nodes plus additional costs for storage, backup, 24x7 monitoring.
Cost estimation for an OpenSearch cluster is available via your Technical Account Manager.
Recommendations from the have been followed in order to enforce, harden and secure our Linux operating systems.
For more information, please visit .
For more details about the High Availability topology, please follow this page .
Only will be officially maintained
For more information regarding the hardening of Kubernetes, please follow this page .
For more information regarding the persistent storage solution available for cegedim.cloud's Kubernetes clusters, please follow this page .
For more information, please visit .
Standard
1
1
2
2
4h
High Availability
3
2
3
4
0 - 5 min
Standard
High Availability
topology.kubernetes.io/region: Region
topology.kubernetes.io/zone: Availability Zone
kubernetes.io/hostname: FQDN of node
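These labels can be used, for example, in a pod's topologySpreadConstraints stanza to spread replicas evenly across Availability Zones. This is an illustrative sketch, not a cegedim.cloud requirement; the app label is a placeholder:

```yaml
# Fragment of a pod spec: spread replicas evenly across Availability Zones
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app   # placeholder: match your own pod labels
```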
Rancher: 2.9.2
Kubernetes: 1.30
Ingress controllers: ingress-nginx 1.10.0, traefik 2.11.2, istio 1.20.3
Prometheus: 2.42.0
Grafana: 9.1.5
Helm: 3.13.3
CSI Ceph: 3.11.0
Node OS: Ubuntu 22.04
CNI - canal (Calico+Flannel): 3.28.1
Docker: 27.1.2
Ceph Cluster: 17.2.5
CSI Ceph: 3.9.0
cgdm-rwo: uses CSI Ceph RBD to provision ReadWriteOnce persistent volumes (backend: Ceph RBD)
Self Service
Customer can perform action autonomously.
On Request
Customer can request for the action to be done by cegedim.cloud support team.
SSH or RDP access
SSH or RDP access is allowed and automatically provided to the requester of the virtual instance.
Start, stop, restart, delete and resize a Virtual Instance
Actions available in self-service in our cloud platform management tool ITCare.
Create, delete, restore a snapshot
Actions available in self-service in our cloud platform management tool ITCare.
Enable or disable Monitoring, 24x7, Backup and Data replication
Actions available in self-service in our cloud platform management tool ITCare.
Add or remove Monitoring downtime
Actions available in self-service in our cloud platform management tool ITCare.
Modify storage allocation
A request ticket is required to modify the storage allocation of a Virtual Instance.
Change configuration file
Some configuration files (such as repositories) will be enforced by cegedim.cloud. A request ticket is required to modify some components.
XS: tracks 100,000 page views per month or less
S: tracks 1 million page views per month or less
M: tracks 10 million page views per month or less
L: tracks 100 million page views per month or less
XL: tracks more than 100 million page views per month
Self Service
Customer can perform action autonomously.
On Request
Customer can request for the action to be done by cegedim.cloud support team.
Update Matomo
Update can be done by the super user.
Install and activate plugin(s)
Plugins can be installed by the super user from the web UI.
Manage user privileges
Super user can grant privileges to any users.
SSH access
SSH access is disabled and reserved to cegedim.cloud administrators.
Change configuration file
On request via ticket.
OpenSearch cluster is available as:
3 nodes cluster - not recommended for Production use
5 or more nodes cluster - recommended for Production use
In the 3-server topology, all servers play the master role, and two of them are also used as data nodes. Each index is replicated by default across those two data nodes.
With 5 or more servers, three nodes are used as master-only nodes and do not host any data. Depending on the Area, master nodes are dispatched across 2 or 3 Availability Zones. The remaining nodes host only data and are spread over two Availability Zones.
In an Area with 3 Availability Zones, the cluster is resilient against one AZ failure.
In an Area with 2 Availability Zones, the cluster might fail if the Availability Zone containing two masters is not available.
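To verify how your cluster is coping, you can query the standard _cluster/health API. The sketch below parses an illustrative response; the endpoint URL and credentials in the commented section are placeholders, not values provided by cegedim.cloud:

```python
import json

# Illustrative _cluster/health response (field names are the real API's; values are made up)
sample_response = """
{
  "cluster_name": "my-cluster",
  "status": "green",
  "number_of_nodes": 5,
  "active_shards_percent_as_number": 100.0
}
"""
health = json.loads(sample_response)
print(health["status"], health["number_of_nodes"])

# Against a live cluster (endpoint and credentials are placeholders):
# import requests
# r = requests.get("https://<cluster>.es.cegedim.cloud:9200/_cluster/health",
#                  auth=("admin", "<password>"), verify=True)
# health = r.json()
```

A "green" status means all primary and replica shards are allocated; "yellow" means some replicas are unassigned.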
This section lists which features and capabilities are available to users, and how to request or perform them:
Self Service
Customer can perform action autonomously using ITCare.
On Request
Customer can request for the action to be done by cegedim.cloud support team.
SSH access
SSH access is disabled and reserved to cegedim.cloud administrators.
Change configuration file
On request via ticket.
Authentication uses OpenSearch internal security system.
It can be configured on request to accept Active Directory as an authentication backend.
Authorization is done using RBAC.
It can be configured on request to accept Active Directory as a backend role provider.
TLS/SSL is activated by default for the incoming and internal network flows.
This section explains how the password management is handled:
admin account
ANY other account
kibana account: used by the dashboard server to connect to the cluster
support account: used by the cegedim.cloud support team (it has limited access and cannot read index data)
centreon account: used by the cegedim.cloud monitoring system (it only has access to monitoring information)
prometheus account: used by the cegedim.cloud metering system (it only has access to monitoring information)
To get started, go to ITCare and search for your target global service where you'll create your new Redis deployment.
Search for your Global Service in the top search bar and click on it to display its information page.
Once in your Global Service, click on the Create Resource button, select Redis and the required version.
Fill in the form:
Select a topology
Define the name of the future deployment
Sizing
Storage requirements for each instance
Target location
Target network
Management options (backup, monitoring, 24/7, remote site replication)
Click Next once all fields have been filled in.
In the customization step:
Enter the password for the administrator account to be provided
Select the required persistence options
Enable or disable TLS encryption
Then click on Next.
Passwords are not saved by cegedim.cloud. Be sure to save your password!
Review the summary before submitting the form.
Once the deployment is ready, you'll be notified by e-mail.
This code describes how to connect to Redis when the topology is a single instance. This code is deliberately simplified (errors are not handled), and is intended for demonstration purposes only.
The Python language is used. We assume that the Redis instance is named pcluredis01.hosting.cegedim.cloud.
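Since the sample itself is not reproduced here, a minimal sketch follows. Only the hostname comes from the text above; the port, password and TLS flag are assumptions depending on your provisioning choices, and the actual calls (commented) require the redis package:

```python
# Connection parameters for the single instance (values other than the hostname are assumptions)
conn_params = {
    "host": "pcluredis01.hosting.cegedim.cloud",
    "port": 6379,        # default Redis port, assumed
    "password": "<administrator password set at provisioning>",
    "ssl": True,         # only if TLS encryption was enabled in the customization step
}

# Actual connection (requires the "redis" package; errors deliberately not handled):
# import redis
# client = redis.Redis(**conn_params)
# client.set("hello", "world")
# print(client.get("hello"))
```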
This code describes how to connect to Redis in a cluster topology (with Sentinel). This is deliberately simplified (errors are not handled), and is intended for demonstration purposes only.
The Python language is used.
We assume that the Redis cluster is named redis-cluster with a "pclu" prefix. There are therefore 3 instances in this cluster:
pcluredis01.hosting.cegedim.cloud
pcluredis02.hosting.cegedim.cloud
pcluredis03.hosting.cegedim.cloud
Two samples are available, with and without TLS.
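As a hedged sketch of the Sentinel approach: the endpoint list is derived from the cluster prefix, and the master is discovered through the Sentinel service. The Sentinel port 26379 and the service name "mymaster" are assumptions (use the values configured on your cluster); the actual connection calls (commented) require the redis package:

```python
def sentinel_endpoints(prefix, count, domain="hosting.cegedim.cloud", port=26379):
    """Build the Sentinel endpoint list from the cluster prefix (26379 is Sentinel's default port)."""
    return [(f"{prefix}redis{i:02d}.{domain}", port) for i in range(1, count + 1)]

endpoints = sentinel_endpoints("pclu", 3)
print(endpoints[0])  # ('pcluredis01.hosting.cegedim.cloud', 26379)

# Discovering and using the master (errors deliberately not handled):
# from redis.sentinel import Sentinel
# sentinel = Sentinel(endpoints, socket_timeout=1.0)
# master = sentinel.master_for("mymaster", password="<password>")  # "mymaster" is an assumed service name
# master.set("hello", "world")
```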
Supported operating systems
Linux
Windows
AIX
CPU (per instance)
2 - 16
RAM (per instance)
4 - 384 GB
Storage (per instance)
35 - 4096 GB
Backup
Monitoring
24x7 monitoring
Data replication (DRP)
Availability
99.8%
Custom region and availability zone
Self-service
Nodes
2 - 256
CPU (per node)
2 - 16
RAM (per node)
6 - 256 GB
Monitoring
24x7 Monitoring
Backup worker node
Backup ETCD
Every 2 hours with 7 days of retention
Backup Persistent Volumes
Data replication (DRP)
High availability
Availability
99.9%
Region selection
Self-service
Replication
x4
x4
Fault Tolerance: 1 AZ is DOWN
Fault Tolerance: 1 DC is DOWN
Provisioning new PV
Remount existing PV
Compatible with all K8S applications
Multi-mount (RWX)
Resizable
Snapshot
Fault Tolerance: loss of 1 AZ
Fault Tolerance: loss of 1 DC
Compatible with K8S 1.22+
Compatible with K8S 1.22-
Instances
3
5 (recommended)
CPU (per instance)
2 - 16 vCPU
RAM (per instance)
4 - 384 GB
Supported version(s)
2.11.0
2.15.0
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.9%
Multi-AZ deployment
Self-service
Managed MariaDB
MariaDB is an open-source database management system created by the original developers of MySQL. MariaDB uses tables to store data in an organized way, and it is possible to interact with the database using SQL queries.
It supports concepts such as primary keys, indexes and relationships between tables. MariaDB is widely used for data storage and management in a variety of applications, from websites to enterprise systems. It is a popular choice due to its reliability, performance and open-source nature.
MariaDB is deployed on-premise in cegedim.cloud's data centers. The long-term support (LTS) version can be deployed in self-service using ITCare.
The same level of service as the Compute offer is guaranteed: deployment of instances, maintenance in operational condition, flexibility, security and monitoring are thus ensured by our experts.
Two topologies are available in self-service:
Standalone (replica can be added on request)
Galera cluster (3 nodes)
Galera cluster is production ready with at least 3 nodes spread over all Availability Zones of a target Area.
Sizing can be configured according to your needs.
Instances
1
3
CPU (per node)
2 - 16 vCPU
2 - 16 vCPU
RAM (per node)
4 - 384 GB
4 - 384 GB
Supported version(s)
10.6
10.6
Monitoring
24x7 Monitoring
Backup
Data replication (DRP)
Availability
99.8%
99.9%
Multi-AZ deployment
Self-service
For more information, please visit MariaDB - Features.
Billing is processed monthly and based on the number of instances plus supplementary costs for storage, backup, 24x7 monitoring.
Cost estimation for a MariaDB instance is available via your Service Delivery Manager.
Rancher handles ITCare SSO authentication: the login / password is the same as for ITCare.
Depending on your cluster location, Rancher is reachable at the following URLs:
EB (Boulogne-Billancourt)
ET (Toulouse-Labège)
In ITCare, you can find your cluster URL in the cluster detail page:
Rancher will ask for authentication at first login: simply click on "Login with Keycloak"
You will then be redirected to the standard login process:
Once logged in, you should see a screen listing all the clusters you have access to:
If the UI gets stuck on "Loading" after logging in, please try to connect to:
If you don't see your cluster in the cluster list on first login, log out and log in again.
You can manage your UI preferences (dark theme, number of rows per table...) by setting up your user preferences. The full documentation is available here:
In order to connect to the cluster using the CLI, you have two options:
by regular remote kubectl
using the Rancher online kubectl
Both are available from the "Cluster" page in Rancher. There are two ways of doing that:
Once on the cluster homepage you can download the "Kubeconfig File":
Or just copy the content of "Kubeconfig File":
This configuration can be mixed with other kubectl configuration.
The authentication can be shared with any cluster managed by the same Rancher instance.
Once on the cluster home page, you can use the Web CLI by clicking on the icon below:
This should launch a web shell like this one:
The token management UI is accessible right beneath the user avatar:
There are two scopes:
no-scope (global scope): used to interact with the global Rancher API
cluster-scoped: token dedicated to accessing a specific cluster
Tokens can have different lifecycles:
a token can have an unlimited lifespan, in which case it follows the lifecycle of the account attached to it
or a specific lifetime
You can use ITCare to add or remove nodes to your cluster.
Rancher manages namespaces via Projects, a concept that exists only in Kubernetes clusters managed by Rancher.
Project is not a Kubernetes native resource.
By default, a Kubernetes cluster is provisioned with 2 projects:
System: containing core components' namespaces like kube-system, etc.
Default: containing the "default" namespace
Users are free to create more Projects if needed.
At the Project level, Rancher offers built-in automation such as access rights granting, network isolation, etc.
Users are strongly encouraged to classify namespaces into Projects.
Switch to project view
Create a new namespace from project view
Insert a unique name, fill in other fields if needed, and click on "Create"
cegedim.cloud recommends and officially supports managing access rights via AD groups.
Only AD groups starting with G_EMEA_* and G_K8_* are known by Rancher.
By default, when a cluster is created:
Standard user role is given to the group G_K8_<CLUSTER_NAME>_USERS which contains the power users of the related Cloud
Admin role is given to the group G_K8_<CLUSTER_NAME>_ADMINS which is empty by default and can be populated with competent & certified users via an ITCare ticket to the AD support team.
For instance, if user user1@cegedim.com needs standard user access to the cluster test-preprod, he needs to request that user1@cegedim.com be added to the AD group named G_K8_TEST_PREPROD_USERS.
When users create a new Project, as default owner, they are free to bind any role on any AD group in the scope of this project.
If the Rancher predefined roles cannot fulfill your needs, please contact the admins of your cluster to configure a custom RoleBinding or ClusterRoleBinding.
Project Level Rights Management
In this part, we assume that rights are given to a group, not to a user. There are two ways to manage rights on a project: the UI or the API.
One of the highest roles you can assign is "Project Admin"
Edit a project that you own or on which the project creator has granted you sufficient rights.
Select the group and the role in the form.
Using the API is pretty straightforward. You will first need some parameters:
To get the project ID, you can use the API explorer: simply use the "View in API" button.
Getting Role ID
You might not be allowed to list roles through the UI, but you can get the role ID through this API request:
Give credentials
Using your API token you can make a single POST request to create the role binding:
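A hedged sketch of that request follows, using Rancher's v3 projectroletemplatebindings endpoint. All IDs, the Rancher URL and the AD group DN below are placeholders to replace with the project ID, role ID and group gathered in the previous steps:

```python
import json

# Payload for POST /v3/projectroletemplatebindings (all values are placeholders)
binding = {
    "type": "projectroletemplatebinding",
    "projectId": "c-xxxxx:p-yyyyy",          # from the "View in API" button
    "roleTemplateId": "project-owner",        # role ID obtained via the API
    "groupPrincipalId": "activedirectory_group://CN=G_K8_MYCLUSTER_USERS,...",
}
body = json.dumps(binding)

# import requests
# requests.post("https://<rancher-url>/v3/projectroletemplatebindings",
#               headers={"Authorization": "Bearer <token-id>:<secret>",
#                        "Content-Type": "application/json"},
#               data=body, verify=True)
```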
Kubernetes resource API versions can be deprecated and even removed when upgrading the Kubernetes version.
Things can break if you have resources whose apiVersion is removed in the new Kubernetes version.
To avoid this risk, one of the solutions is to use the "kubent" tool to check compatibility.
Kubent detects deprecated objects in a Kubernetes cluster. You should migrate/modify detected resources before upgrading the Kubernetes version.
To install kubent:
To detect deprecated objects that will be removed in the newer Kubernetes version:
An example of the output:
In this tutorial, if your cluster is planned for an upgrade to Kubernetes version 1.22, you should migrate your Ingress resource named "toto" from API version networking.k8s.io/v1beta1 to networking.k8s.io/v1 before the upgrade.
This migration might imply modifying some extra fields of the resources. Please refer to the official documentation:
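As an illustration of such a migration, a networking.k8s.io/v1 Ingress requires the pathType field and restructures the backend into a service object. The host and service names below are placeholders, not taken from the original:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: toto
spec:
  rules:
    - host: toto.example.local     # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix       # mandatory in networking.k8s.io/v1
            backend:
              service:             # v1beta1's serviceName/servicePort become this object
                name: toto-svc     # placeholder service name
                port:
                  number: 80
```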
In this example, we will configure log forwarding from a Kubernetes cluster to an OpenSearch cluster.
The OpenSearch cluster in this example is my-cluster.es.cegedim.cloud
The Cluster Output name is my-cluster-output
In Rancher, under Logging > ClusterOutput and Logging > Output, edit the YAML configuration and change this:
ClusterFlow/ClusterOutput can cause trouble when sending logs to an OpenSearch / ELK cluster: a conflict of the expected value type for the same key (e.g. a value changing from "long" to "string") will be rejected by the OpenSearch / ELK cluster.
This can happen if you have Dynatrace activated.
Here are full examples for the spec of ClusterOutput/Output for ElasticSearch and OpenSearch
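One possible shape of such a spec is sketched below, using the logging-operator's elasticsearch output plugin fields. The namespace, host and buffer values are illustrative assumptions, not values prescribed by cegedim.cloud:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: my-cluster-output
  namespace: cattle-logging-system   # assumed: the Rancher logging namespace
spec:
  elasticsearch:
    host: my-cluster.es.cegedim.cloud   # your OpenSearch endpoint
    port: 9200
    scheme: https
    ssl_verify: true
    buffer:
      flush_interval: 30s               # illustrative buffer tuning
```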
There are 2 options:
Migration to Flow/ClusterOutput: pushes all namespace logs to the same OpenSearch index
Migration to Flow/Output: pushes each namespace's logs to a dedicated OpenSearch index
The recommendation is to migrate to Flow/Output; it is even possible to have a dedicated OpenSearch index for a very important application.
Create a file with all namespaces:
Create K8s YAML files to configure logs on all namespaces:
Apply configuration:
Create a file with all namespaces:
Create K8s YAML files:
Apply configuration:
No big buffer should build up if everything goes well. Let's check that:
Let's check the last 5 lines of fluentd's log:
Have a deep look into /fluentd/log/out inside the fluentd pod, but most of the time the following will help
It is easy to identify the pod that causes the issue:
Please understand that the error is not in Kubernetes: it is the container that produces inconsistent logs in JSON format. OpenSearch then rejects the sent logs; Banzai will retry and, sooner or later, a buffer overflow will occur.
Sources:
One of the short-term solutions below can be applied:
Remove the pod from Flow (pod exclusion) or disable entire Flow of related namespace
Clean up related index in ES server
Long term solution:
Adapt the application to produce more consistent logs
See if it is possible to configure ES to gently ignore, but not reject, the whole package sent by Banzai
MariaDB Community Edition is available in two self-service topologies:
A standalone instance
A Galera cluster based on 3 active/active instances
For more information on MariaDB architecture:
MariaDB instances are accessible on port 3306 by default.
MariaDB is available in the following cegedim.cloud data centers:
EB4 (Boulogne-Billancourt, France)
ET1 (Labège, France)
In some cases, when a third node is deployed (Galera cluster), nearby secondary data centers may also be used to ensure maximum resilience:
EB5 (Magny-les-Hameaux, France)
ET2 (Balma, France)
The MariaDB Platform as a Service is hosted on the Linux Debian 10 distribution.
Minimum system requirements are 2 CPUs and 4GB RAM.
Storage can be configured during provisioning and subsequently increased on request.
This is a single instance deployed in a single data center.
Two MariaDB instances are deployed in the same data center: the main node with a read-only replica.
Only deployed on request!
A MariaDB Galera cluster is a virtually synchronous multi-primary cluster for MariaDB. A cluster based on 3 active/active instances can be deployed in any cegedim.cloud data center.
The supported version is the latest LTS (long term support) currently available. For more information, please consult MariaDB versions.
This section lists which features and capabilities are available to customers, and how to request or perform them:
Self Service
Customer can perform action autonomously.
On Request
Customer can request for the action to be done by cegedim.cloud support team.
SSH access
SSH access is disabled and reserved to cegedim.cloud administrators.
Change configuration file
On request via ticket; reviewed by the cegedim.cloud team.
Some default MariaDB parameters configured by cegedim.cloud:
transaction_isolation: READ-COMMITTED (transaction isolation level)
max_connections: 1000 (maximum allowed connections to the instance)
innodb_buffer_pool_size: 50% of RAM (InnoDB buffer memory allocated to the instance)
slow queries: disabled (slow queries are not logged by default)
Backup policy for MariaDB is configured as follows:
Full backup every weekend (online)
Differential backup every day (online)
Binlog backup every two hours
Default backup retention for the full backups and dependent backups is two weeks.
MariaDB authentication is based on internal users.
An administration user is provided to the customer once provisioning is complete.
The password of this user is not stored nor saved by cegedim.cloud. Please be sure to save it in your own vault.
As part of our Managed Databases offer, MariaDB is specifically monitored on top of the underlying system to ensure service uptime and performance.
The following key MariaDB indicators are monitored and tracked:
Number of aborted client connections
Number of failed server connection attempts
Number of refused connections due to internal server errors
Maximum number of simultaneous connections opened
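These indicators correspond to standard MariaDB status variables, which you can also query yourself. The variable names below are the standard server ones; the commented connection details are placeholders and require a client library such as pymysql:

```python
# Standard MariaDB/MySQL status variables behind the monitored indicators above
STATUS_VARIABLES = {
    "Aborted_clients": "aborted client connections",
    "Aborted_connects": "failed server connection attempts",
    "Connection_errors_internal": "connections refused due to internal server errors",
    "Max_used_connections": "maximum simultaneous connections opened",
}

# Build the SHOW GLOBAL STATUS query restricted to those variables
query = "SHOW GLOBAL STATUS WHERE Variable_name IN ({})".format(
    ", ".join("'%s'" % name for name in STATUS_VARIABLES)
)
print(query)

# import pymysql
# conn = pymysql.connect(host="<instance>", user="<admin user>", password="<password>")
# with conn.cursor() as cur:
#     cur.execute(query)
#     print(cur.fetchall())
```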
Connect to ITCare, search for the Global Service to attach the cluster to and click on it.
Click Create resource on the left control panel and select OpenSearch.
Give the cluster a unique name and define a prefix name that will be used to name the virtual machines. Click Next.
Select number of nodes and node size. Click Next.
Select storage volume. Click Next.
Select the region and area to which you want to locate your cluster. Click Next.
Select VLAN you want to deploy your cluster into. Click Next.
Enable or disable additional options:
Virtual machines and clusters monitoring
24/7 monitoring
Backup
Virtual machines replication (Disaster recovery)
Click Next.
Select the version and define an administrator password that will be used to manage your cluster.
Passwords are not saved by cegedim.cloud. Make sure to save your password.
Verify settings, here you can:
Check virtual machine names to be created.
Save your administrator password.
Modify the management options.
Click Submit.
Once the cluster is ready, you will be notified by email with the information required to connect to the cluster.
On the left control panel, click the name of the cluster. The cluster page is displayed. At the top of the cluster page, click the Manage button, then Start and confirm.
An email notification will be sent when the service is activated.
At the top of the OpenSearch cluster page, click the Manage button, then Stop.
Input an RFC number for tracking (optional). Click on Submit.
Stopping a cluster will stop all virtual machines attached to the cluster and monitoring will be disabled.
An email notification will be sent when the cluster is stopped.
At the top of the cluster page, click the Manage button, then Add Nodes.
Select the number of nodes you want to add (an even number) and select the new size (CPU/RAM). Specify the data disk size.
An email notification will be sent when all nodes are added.
At the top of the cluster page, click the Manage button, then Resize Nodes.
Select the nodes you want to resize and select the new size (CPU/RAM).
An email notification will be sent when all nodes are resized.
At the top of the cluster page, click the Manage button, then Delete.
This action will stop and delete all virtual machines. All CIs will be removed and will disappear from your Global Service.
Please note that this action is not recoverable.
Input an RFC number for tracking (optional) then click Submit.
An email notification will be sent when the cluster is removed.
Upgrading a cluster is not yet implemented in ITCare.
If you want to upgrade your existing cluster to a newer available version, you must send a request to support indicating which version you would like.
During the upgrade, the OpenSearch cluster will continue to work correctly as a rolling upgrade is performed. However, you might see some short unavailability during the upgrade of the dashboard servers, which is done after the upgrade of the OpenSearch cluster.
The cluster topology upgrade is proposed for an OpenSearch cluster initially consisting of three nodes (basic topology).
The upgrade is implemented in ITCare via "Manage - Migrate to dedicated master" option.
Migration allows adding two master nodes, while specializing the existing (n-1) nodes to the data role. The cluster will thus be composed of three master nodes, with the rest of the nodes dedicated to the data role.
During the upgrade to a "Dedicated Master" topology, the cluster will continue to work, although it may temporarily enter a "Yellow" status. However, some cluster objects, such as dashboards, may be temporarily unavailable. Everything will return to normal once the migration is complete.
The functionality to add dedicated ingestion nodes is available for an OpenSearch cluster with a "dedicated master" topology (five or more nodes).
The deployment of dedicated ingestion nodes is implemented in ITCare through the "Manage - Add Ingest Nodes" option.
With these dedicated ingestion nodes, it is possible to isolate the data ingestion process, which helps minimize the impact on indexing or search performance, even when dealing with large volumes of incoming data. This improves the overall stability and performance of the cluster.
The multi-tier architecture (Hot-Warm-Cold) in OpenSearch allows for optimization of costs, performance, and scalability based on specific application needs. This architecture enables data storage based on access frequency, distributing frequently accessed data to identified nodes (Hot), moderately accessed data to identified nodes (Warm), and infrequently accessed data to identified nodes (Cold). In addition to attribute differences, the number of nodes at each level can also vary to meet specific search query needs.
At cegedim.cloud, to meet the needs of our customers, we offer two levels of the multi-tier architecture (Hot-Cold). Node attribute configuration is available in ITCare via the "Set node attributes" action, accessible at each node level.
In OpenSearch, each node has a default limit of 1,000 shards.
This limit is in place to help maintain the performance and stability of your cluster by controlling resource usage per node. Exceeding this limit, for example when creating new indexes, will result in an error.
If you encounter this error, consider the following actions to resolve it.
Adding nodes distributes the shards across more resources, reducing the load per node. Each additional node brings its own 1,000-shard limit, effectively increasing your cluster's overall shard capacity.
Adjust the cluster.max_shards_per_node parameter to allow more shards per node if you have sufficient hardware resources (CPU, memory and storage). We recommend not exceeding 2,000 shards per node to avoid overload.
Use the following command to update the parameter:
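For example, via the cluster settings API (the host and credentials are placeholders for your own deployment; authentication and TLS options depend on how your cluster was provisioned):

```shell
# Raise the per-node shard limit to 2000 as a persistent cluster setting
curl -X PUT "https://mycluster:9200/_cluster/settings" \
  -u admin \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node": 2000}}'
```

A "persistent" setting survives cluster restarts; use "transient" instead for a temporary change.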
When creating new indices, carefully plan the number of shards to match the index's data size and expected growth.
Avoid over-sharding by:
Using fewer, larger shards for smaller datasets.
Periodically reviewing shard allocation to ensure efficient resource usage.
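As an illustration of planning shard counts at index creation time, a small index can be created with an explicit, modest shard count (the index name, host and values are placeholders):

```shell
# Create an index with a single primary shard and one replica
curl -X PUT "https://mycluster:9200/my-index" \
  -u admin \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"index": {"number_of_shards": 1, "number_of_replicas": 1}}}'
```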
You should regularly monitor shard and resource utilization in your cluster to ensure optimal performance.
A Grafana dashboard for OpenSearch is available for this purpose in your Advanced Metrology platform.
You should ensure your nodes are equipped with sufficient resources before increasing shard limits.
Managed PostgreSQL
PostgreSQL is currently the leading open source RDBMS (Relational Database Management System), with a wide range of features and a large community supporting it.
cegedim.cloud provides fully managed PostgreSQL database instances so you can build your applications without having to operate the availability, security and resilience of PostgreSQL databases yourself.
PostgreSQL is deployed on-premise in cegedim.cloud's data centers.
The same level of service as the Compute offer is guaranteed : deployment of instances, maintenance in operational condition, flexibility, security and monitoring are thus ensured by our experts.
Two types of PostgreSQL deployments are available :
Standalone Instance
High availability : two PostgreSQL instances with automatic fail-over for improved resiliency
Sizing can be configured according to your needs.
|  | Standalone Instance | High Availability |
| --- | --- | --- |
| Instances | 1 | 2 |
| CPU (per instance) | 2 - 16 vCPU | 2 - 16 vCPU |
| RAM (per instance) | 4 - 384 GB | 4 - 384 GB |
| Supported versions | 10, 11, 12, 13, 14, 15, 16 | 12, 13, 14, 15, 16 |
| Monitoring | ✓ | ✓ |
| 24x7 Monitoring | ✓ | ✓ |
| Backup | ✓ | ✓ |
| Data replication (DRP) | ✓ | ✓ |
| Availability | 99.8% | 99.9% |
| Multi-AZ deployment | — | ✓ |
| Self-service | ✓ | ✓ |
For more information, please visit PostgreSQL - Features.
Billing is processed monthly and based on the number of instances plus supplementary costs for storage, backup and 24x7 monitoring.
Cost estimation for a PostgreSQL instance is available via your Service Delivery Manager.
To get started, head over to ITCare and search your target Global Service where you will create your new PostgreSQL.
Search your Global service in the top search bar and click on it to display its information page.
Once inside your Global Service, click on the Create Resource button and then select PostgreSQL.
Go to Managed databases and select PostgreSQL and pick the required version.
Fill in the form then click Next. Select your customizations and click Next.
Review the synthesis before submitting the form.
Once the deployment is ready, you will be notified by email.
On the resource page of your PostgreSQL, you can take any action available using the Manage button in the upper right corner. This includes starting, stopping, deleting, rebooting, resizing and much more.
When your cluster is created with cegedim.cloud ITCare, you obtain an SQL role with credentials.
With these credentials, you can connect to the cluster using its name on TCP port 5432. You can use the postgres database to connect to.
If your cluster is named "mycluster", here is an example of how to connect using Python:
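A minimal sketch using psycopg2 (the role name "admin" and the password are the credentials you received at provisioning; the psycopg2 or psycopg2-binary package must be installed separately):

```python
import psycopg2  # assumption: psycopg2 (or psycopg2-binary) is installed

conn = psycopg2.connect(
    host="mycluster",      # the cluster name given at provisioning
    port=5432,
    dbname="postgres",     # default maintenance database
    user="admin",          # role provided with your credentials
    password="<password>",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()
```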
When your cluster is created with cegedim.cloud ITCare, you obtain an SQL role named "admin" with credentials.
If you chose to activate TLS, you received the root certificate that you should trust and pass to the library you use to connect, for example psycopg2.
With these credentials, you can connect to the cluster using its name on TCP port 5432. You can use the postgres database to connect to.
If your cluster is named "mycluster", here is an example of how to connect using Python:
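A minimal TLS sketch using psycopg2 (host, password and certificate path are placeholders; the psycopg2 package must be installed separately):

```python
import psycopg2  # assumption: psycopg2 (or psycopg2-binary) is installed

conn = psycopg2.connect(
    host="mycluster",
    port=5432,
    dbname="postgres",
    user="admin",
    password="<password>",
    sslmode="verify-full",   # enforce TLS and verify the server certificate
    sslrootcert="root.crt",  # path to the root certificate you received
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()
```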
It is safer not to use an admin role for applications. Once connected, you may create a regular role as follows (replace <a_role> and <very_strong_password> with your own credentials):
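A minimal sketch of such a statement:

```sql
CREATE ROLE <a_role> WITH LOGIN PASSWORD '<very_strong_password>';
```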
If you want to create a database whose owner is the role you have just created, use the following SQL statement, with template0 as the template database:
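A sketch of such a statement (<a_database> is a placeholder for your database name):

```sql
CREATE DATABASE <a_database>
  OWNER <a_role>
  TEMPLATE template0
  ENCODING 'UTF8';
```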
The PostgreSQL PaaS provides a feature to restore a PostgreSQL PaaS (source) onto another PostgreSQL PaaS (destination) at a given time (using Point-In-Time Recovery), under the following constraints:
the user must have access to the cloud of the source farm and the destination farm
the source must be backed up (option chosen during creation)
both source and destination must be active
the source and destination must be different
the source and destination must be in the same version of PostgreSQL
the source and destination must be in version 12 or higher
the target time must not be in the future (bounded on the right by the current time).
the target time must not be more than 7 days in the past (for non-production services) or 14 days (for production services) from the current time (bounded on the left by backup retention)
You can choose to include or exclude the time target in the restoration process.
The process of restoring a PostgreSQL database is an important step. Let's see how to proceed below:
Currently supported versions of PostgreSQL are : 10, 11, 12, 13, 14, 15, 16.
To upgrade your PaaS PostgreSQL, please refer to this page: PostgreSQL - Upgrade
cegedim.cloud supports two types of PostgreSQL deployments :
Single Instance mode provides a standard PostgreSQL instance
High Availability provides a multi-instance PostgreSQL deployment, with improved resilience and scalability capabilities
PostgreSQL is available in both of cegedim.cloud's data centers:
EB4 (Boulogne-Billancourt, France)
ET1 (Labège, France)
In some cases, when a second node is deployed (High Availability), a nearby secondary data center can also be used to ensure maximum resiliency:
EB5 (Magny-les-Hameaux, France)
ET2 (Balma, France)
For the High Availability topology, the PaaS is built to be DC-resilient where possible.
Below is a sample of node placement:
This section lists which features / capabilities are available to customers, and how to request / perform them:
Self Service
Customer can perform action autonomously.
On Request
Customer can request the action to be performed by the cegedim.cloud support team.
SSH access
SSH access is disabled and reserved to cegedim.cloud administrators.
Change configuration file
On request via ticket. Only possible if it doesn't affect monitoring and resilience.
Install extension
PostgreSQL extensions can now be installed in self-service using ITCare, provided your deployment is in version 15 or higher. Otherwise, a request ticket still applies.
It's possible to add functionality to PostgreSQL through so-called extensions. These extensions can add new types, additional functions for administrators and "classic" users alike, or even complete applications.
Once the PostgreSQL PaaS has been provisioned, you can install some of these extensions through ITCare. Below is the list of extensions supported by PostgreSQL PaaS from version 15 onwards:
Please note that the installation of certain extensions may require a restart of PostgreSQL and therefore cause your PostgreSQL PaaS to be unavailable.
The customer is provided with a role for which they choose the password.
The password of this user is not stored nor saved by cegedim.cloud. Please be sure to save it in your own vault.
The role provided to the customer has the following authorizations:
LOGIN
CREATEROLE
CREATEDB
The customer may therefore create dedicated application roles and databases.
Secured transport is an option at provisioning time and is only available from version 13 onwards.
If secured transport is selected, TLS/SSL will be enabled for the PostgreSQL protocol and only a TLS connection from the clients will be accepted.
All data is stored in cegedim.cloud data centers on encrypted storage arrays.
This section lists password management:

| Account | Password hashing |
| --- | --- |
| dedicated customer account | SCRAM-SHA-256 |
| any other account | SCRAM-SHA-256 |
| cegedim.cloud account | SCRAM-SHA-256 |
| monitoring account | SCRAM-SHA-256 |
If backup is enabled during provisioning (enabled by default for a Service of Production type), the following backup policies will apply :
Full dump every day retained for 14 days on Object Storage
Full backup once a week.
Differential backups in between.
Write-ahead (WAL) logs are archived.
Point-in-Time recovery is supported for 14 days on Object Storage
As part of our Managed Databases offer, PostgreSQL is specifically monitored on top of the underlying system to ensure service uptime and performances.
The following key PostgreSQL indicators are monitored and tracked :
Connections
Memory usage
Transaction ID wraparound
Health status
This guide will go through all the configuration items needed to improve the availability and service continuity of your applications deployed in a cegedim.cloud managed Kubernetes clusters.
This is a "must-read" guide to ensure your Disaster Recovery strategy matches cegedim.cloud's compute topology.
Once your Kubernetes cluster is configured to run using the High Availability (HA) topology, some configuration best practices are required to allow your applications:
to run simultaneously on all datacenters of the region
to have sufficient capacity in all datacenters in case of a disaster in one of them
As a reminder, the nodes of the Kubernetes clusters are distributed into 3 availability zones (AZ) and 2 datacenters :
AZ "A" and "B" are running on the primary datacenter
AZ "C" is running on the secondary datacenter
For stateless services that support scaling, best practice is to have at least 3 pods running:
These (at least) 3 pods need to be properly configured so that at least one pod runs in each Availability Zone:
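A sketch of such an anti-affinity configuration (the deployment name and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # "soft" rule: prefer spreading pods across zones
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
        - name: my-app
          image: my-app:latest
```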
We are using preferredDuringSchedulingIgnoredDuringExecution and not requiredDuringSchedulingIgnoredDuringExecution, because we want this requirement to be "soft": Kubernetes will then allow scheduling multiple pods on the same AZ if you are running more replicas than there are AZs, or in case of a zone failure.
In Kubernetes 1.20, failure-domain.beta.kubernetes.io/zone will be deprecated; the new topology key will be topology.kubernetes.io/zone.
If you are using the High Availability cluster topology, your objective is to deploy resilient applications in case of a datacenter failure.
This page describes some best practices to determine the sizing of worker nodes for each Availability Zone where your workloads are running.
As a reminder, Kubernetes Cluster is deployed in 3 availability zones, and 2 datacenters. In the worst case scenario, only 1 AZ will run if the primary datacenter has a major disaster.
That's the hypothesis to take into account to determine the "nominal" sizing, which answers the question: "If the primary datacenter fails, how much CPU / RAM capacity do I need to keep my application working?"
To determine this capacity, and thus the worker nodes deployed in the "C" Availability Zone (how many, and with which resources), you will need 3 parameters:
Minimum Business Continuity Objective (MBCO)
Like RTO / RPO, MBCO is a major parameter for sizing your DRP.
To sum up, it is the percentage of your deployed application's capacity that is required to keep your business up and running.
Depending on how you sized your workloads when running in 3 AZs, and on the performance you deem sufficient, it can be 30%, 50% or 100%.
For example, if you have an application with 3 replicas of 4 GB RAM, one on each AZ, you can determine the MBCO very differently:
33%
having only one pod running during the outage is sufficient, because performance will be OK
you can take the risk of not having redundancy during the outage period
66%
either a minimum of 2 pods is required for acceptable performance
and/or you don't want to take the risk of failing if the only remaining pod fails
100%
you absolutely need a minimum of 3 pods to run the service with nominal performance
Choice is yours !
Pods Resources Requests
As Kubernetes will try to reschedule your pods in case of an outage, resource requests are a major parameter to manage.
If AZ "C" does not have enough resources to satisfy all the requirements of the desired pod deployments, Kubernetes will not deploy them, and your applications may not be available!
To know about your requests, you can run this command :
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"-"}{range .spec.containers[*]}{"/"}{.name}{";"}{.resources.requests.cpu}{";"}{.resources.limits.cpu}{";"}{.resources.requests.memory}{";"}{.resources.limits.memory}{"\n"}{end}{"\n"}{end}' | grep -v -e '^$'
Resources Usage
Determining requests ensures that Kubernetes will deploy as many pods as you want, but what about the real capacity your pods are using? This must also be taken into account to get a full picture of the "raw" resources your applications require.
To determine that, you can run this command to know about your pod's current usage :
kubectl top pod
Then you have two choices to calculate sizing :
At the "Cluster Level" granularity: if you are just beginning the process and do not have much complexity or variability in your workloads, use this:
Determine a global MBCO across deployments
Sum all pods' resource requests to get a single number
Sum all pods' resource usages to get a single number
At the "Pod Level" granularity: if you want the sizing to fit perfectly and you have the time, determine those parameters for each deployment in your Kubernetes cluster, because the MBCO may vary! For example:
A web application will require a MBCO with 100%
A cache will require a MBCO of 50%
A "nice-to-have" feature, or an internal tool can be 0%
The "Cluster Level" calculation is not accurate enough to be absolutely certain that the cluster will be appropriately sized. Be aware of this, and evaluate whether it's worth taking the risk.
In any case, this sizing has to be reassessed regularly, depending on the new deployments or rescaling you perform in your daily operations.
If you have summed all requests and usage, and you've determined the MBCO on the "cluster" level, you can use this simple formula to calculate required sizing for AZ "C" in secondary datacenter :
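As a hedged sketch (the exact formula is an assumption; adapt it to your own risk assessment), the cluster-level calculation can be expressed as:

```python
def az_c_capacity(total_requests: float, total_usage: float, mbco: float) -> float:
    """Cluster-level sizing sketch: capacity AZ "C" must provide if the
    primary datacenter (AZ "A" and "B") is lost.

    mbco is a fraction (e.g. 0.5 for 50%). We take the larger of declared
    requests and observed usage, so that pods are both schedulable and
    have enough real headroom to run.
    """
    return mbco * max(total_requests, total_usage)

# Example: 48 GB requested in total, 36 GB actually used, MBCO of 50%
print(az_c_capacity(48, 36, 0.5))  # -> 24.0
```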
If you've determined a per-deployment MBCO, you will have to calculate your sizing with a more complex formula :
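As a hedged sketch (the formula and data structure are assumptions), the per-deployment calculation sums each deployment's MBCO-weighted capacity:

```python
def az_c_capacity_per_deployment(deployments) -> float:
    """Per-deployment sizing sketch: each deployment carries its own MBCO.

    deployments: iterable of (mbco, requests, usage) tuples, one per
    deployment; values and structure are illustrative.
    """
    return sum(mbco * max(req, use) for mbco, req, use in deployments)

# Example: web app at 100% of 12 GB, cache at 50% of 8 GB, internal tool at 0%
print(az_c_capacity_per_deployment([(1.0, 12, 10), (0.5, 8, 8), (0.0, 4, 2)]))  # -> 16.0
```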
Once you've calculated your MBCO, it is important to leverage Kubernetes capabilities (QoS, especially PodDisruptionBudget) to make your deployments follow your decision.
Use ITCare or request help from our support to size your cluster.
During this phase, you'll need to prioritize your assets and define the components that are essential to the availability of your services.
To know your resource utilization once deployed, it's a good idea to observe the resource consumption of your workloads.
You can access your metrics via the Rancher user interface or via a client like Lens.
In Kubernetes there are 3 QoS classes:
Guaranteed
Burstable
BestEffort
For critical workloads, you can use the "Guaranteed" QoS class, which simply sets resource limits equal to resource requests:
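A container spec fragment illustrating this (values are placeholders):

```yaml
resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
  limits:          # limits == requests -> Guaranteed QoS
    cpu: "500m"
    memory: "512Mi"
```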
For less critical workloads, you can use the "Burstable" QoS class, where resource requests are set lower than resource limits:
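A container spec fragment illustrating this (values are placeholders):

```yaml
resources:
  requests:        # requests < limits -> Burstable QoS
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "1Gi"
```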
The pod disruption budget lets you configure your fault tolerance and the number of failures your application can withstand before becoming unavailable.
With a stateless workload, the aim is to have a minimum number of pods available at all times. To achieve this, you can define a simple pod disruption budget:
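For example (the name, selector and threshold are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2        # always keep at least 2 pods running
  selector:
    matchLabels:
      app: my-app
```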
With a stateful workload, the aim is to have a maximum number of pods unavailable at any time, in order to maintain quorum for example:
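A sketch of such a budget for a stateful workload (the name and selector are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-stateful-pdb
spec:
  maxUnavailable: 1      # never lose more than 1 pod at a time, preserving quorum
  selector:
    matchLabels:
      app: my-stateful-app
```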
High traffic scenarios:
OpenSearch has several breaking changes, so you must verify your application compatibility using this link:
This is the major breaking change, and it is not specific to OpenSearch, as it was already planned by Elasticsearch before the fork.
So you must make sure that your applications no longer use the "type" parameter.
Here are some solutions for products often used with Elastic solutions, and how to configure them to work with OpenSearch 2.x.
If the client is Fluentbit, the easiest solution is to set the parameter Suppress_Type_Name to On.
It is also possible to change the output plugin to the opensearch native one which is part of Fluentbit since version 1.9.
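A minimal sketch of the corresponding Fluent Bit output section (the host and match pattern are placeholders):

```
[OUTPUT]
    Name                es
    Match               *
    Host                mycluster
    Port                9200
    Suppress_Type_Name  On
```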
The following article may prove useful for getting started with Fluentbit and OpenSearch:
If the client is Fluentd, it's trickier. There is also a suppress_type_name parameter, but the plugin only uses it if it detects an Elasticsearch version >= 7.
So we need to add two more parameters:
verify_es_version_at_startup to false, so the plugin does not try to detect the version
default_elasticsearch_version to '7'
Here is an example of the change to be made to the spec of the output plugin we're using in Kubernetes:
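As a hedged sketch, on a Rancher / Banzai Cloud logging Output resource the three parameters could look like this (the resource name, host and exact field placement are assumptions based on the fluent-plugin-elasticsearch options of the same names):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-output
spec:
  elasticsearch:
    host: mycluster
    port: 9200
    suppress_type_name: true
    verify_es_version_at_startup: false
    default_elasticsearch_version: "7"
```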
There is also an output plugin for OpenSearch.
The OpenSearch Plugin is not yet available in Rancher Logging System
Microsoft SQL Server is a relational database management system developed by Microsoft.
It is designed to store and retrieve data as requested by other software applications. The core features of SQL Server include:
Data storage and retrieval: SQL Server provides a secure and scalable platform to store a large amount of structured and semi-structured data efficiently.
Data querying and manipulation: It offers advanced querying capabilities, such as the ability to write complex queries using SQL language, join tables, create views, and retrieve data based on specific criteria.
Business intelligence and analytics: SQL Server provides tools and services for data analysis, reporting, and visualization, allowing users to gain insights from the stored data to make data-driven decisions.
Data security and integrity: It offers robust security features, like authentication, access control, and encryption, to protect sensitive data from unauthorized access or modifications.
High availability and scalability: SQL Server supports features like clustering, failover, and replication to ensure continuous availability of data and support for growing demands by scaling up or out the database infrastructure.
SQL Server is deployed on site in cegedim.cloud data centers.
The same level of service as the Compute offer is guaranteed: instance deployment, operational maintenance, flexibility, security and monitoring are all handled by our experts.
SQL Server 2016, 2017, 2019 and 2022 are available in self-service via our ITCare cloud management platform.
Two editions are supported: Standard and Enterprise.
Two topologies are available:
Stand-alone instance
Always On cluster
The Always On cluster topology is production-ready but is only available on demand. Only SQL Server 2022 Enterprise edition is available for self-service provisioning.
Sizing can be configured to suit your needs.
|  | Standalone Instance | Always On Cluster |
| --- | --- | --- |
| Instances | 1 | 3 |
| CPU (per instance) | 2 - 16 vCPU | 2 - 16 vCPU |
| RAM (per instance) | 8 - 384 GB | 8 - 384 GB |
| Supported version(s) | 2016, 2017, 2019, 2022 | 2016, 2017, 2019, 2022 |
| Backup | ✓ | ✓ |
| Monitoring | ✓ | ✓ |
| 24/7 Monitoring | ✓ | ✓ |
| Replication (DRP) | ✓ | ✓ |
| Availability | 99.8% | 99.9% |
| Multi-AZ deployment | — | ✓ |
| Self-service | ✓ | On demand |
For more information, please read SQL Server - Features.
Please specify if the operation is to be carried out outside of business hours in order to plan an RFC.
It is recommended that you upgrade your non-production environments first in order to estimate the downtime generated by the operation and to test your applications using the new engine version.
The upgrade of a PostgreSQL deployment (single-instance or high availability) takes place in two fully automated steps:
Update the Operating system first if required
Multiple updates depending on the scenario: Debian 9 → Debian 10 → Debian 11 → Debian 12
Update of the PostgreSQL engine in the target version
Depending on the source and target versions of PostgreSQL, it may be necessary to first migrate the operating system to a version supported by cegedim.cloud (for more information, check OS / PostgreSQL support matrix).
The duration of an update is variable depending on:
The configured CPU and RAM resources
The amount of data whose headers must be modified by the PostgreSQL engine.
The amount of data to be reindexed following a change of C library, after an OS update.
The amount of data on which to activate the checksum (data page checksums have been activated since PaaS PostgreSQL 12)
The amount of data to be vacuumed.
The amount of data to be backed up (a full backup is performed after the migration process).
The backup mode:
Point-in-time Recovery (PITR) from PostgreSQL 12 and higher.
The "dump" backup mode disappears in favour of "PITR" and is only used in PostgreSQL versions lower than 12.
Debian upgrade: 10 minutes on average
PostgreSQL reindexing: 5 minutes on average
PostgreSQL upgrading: 1 minute on average
PostgreSQL checksum: 3 minutes on average
PostgreSQL vacuuming: 1 minute on average
PostgreSQL full backup (PITR mode): 16 minutes on average
In PostgreSQL HA, we need to upgrade the replica too and synchronize this replica with the leader:
PostgreSQL synchronizing: 4 minutes on average
Total average duration for a 100GB database: 40 minutes
Linux distributions supported by cegedim.cloud depending on the PostgreSQL version:
| PostgreSQL version | Debian version |
| --- | --- |
| PostgreSQL 10 | Debian 9 |
| PostgreSQL 11 | Debian 10 |
| PostgreSQL 12 | Debian 10 |
| PostgreSQL 13 | Debian 11 |
| PostgreSQL 14 | Debian 11 |
| PostgreSQL 15 | Debian 11 |
| PostgreSQL 16 | Debian 12 |
If the operating system is updated, it may require a complete reindexing (also handled by cegedim.cloud) due to changes in the C library when the operating system is updated.
Depending on the amount of data, this operation may take some time.
Below are the update paths supported by cegedim.cloud:
PostgreSQL 10
Debian 9 → Debian 10
Debian 9 → Debian 10
Debian 9 → Debian 11
Debian 9 → Debian 11
PostgreSQL 11
Debian 9 → Debian 10
Debian 9 → Debian 11
Debian 9 → Debian 11
PostgreSQL 12
Debian 10 → Debian 11
Debian 10 → Debian 11
PostgreSQL 13
PostgreSQL 14
PostgreSQL 15
* An operating system upgrade is required
** Two operating system upgrades are required
*** Three operating system upgrades are required
The update of a Redis PaaS is the responsibility of cegedim.cloud and can be requested via a request ticket submitted from ITCare.
Please specify a time slot to execute the upgrade and if the operation is to be carried out outside of business hours.
It is recommended that you upgrade your non-production environments first in order to estimate the downtime generated by the operation and to test your applications using the new engine version.
The upgrade of a Redis deployment (single-instance or high availability cluster) takes place in two fully automated steps:
Update the operating system first if required
Multiple updates depending on the scenario: Debian 10 → Debian 11 → Debian 12
Update of the Redis and Sentinel engine in the specified target version
The duration of an update is variable and depends on:
The topology
Standalone topology: Redis will be upgraded.
Sentinel topology: Redis and Sentinel on all nodes will be upgraded.
The number of operating system upgrades necessary
Debian operating system upgrade: 10 minutes on average
Redis package upgrade : 5 minutes on average
Sentinel package upgrade: 5 minutes on average
Linux distributions supported by cegedim.cloud depending on the Redis version:
| Redis version | Debian version |
| --- | --- |
| 6.2.x | Debian 10 |
| 6.2.x | Debian 12 (deployments created after May 31, 2024) |
| 7.2.x | Debian 12 |
Redis is self-service deployable via our cloud platform management tool: ITCare.
Two topologies are available:
Standalone instance
Sentinel cluster
In both cases, you can choose whether or not to persist data on disk at the time of the creation request, see Persistence
Once deployed, the stand-alone instance can be accessed on listening port 6379.
The Redis Sentinel cluster is deployed on 3 instances distributed over all the Availability Zones of an Area.
Once deployed, the cluster is accessible on listening port 6379.
Each instance runs Redis and Sentinel processes
Sentinel listening port: 26379
Of the 3 instances, one is primary and the other two are replicas
Replicas are open read-only
Persistence refers to the writing of data to durable storage, such as a solid-state disk (SSD). Redis provides a range of persistence options. These include:
RDB (Redis Database): RDB persistence performs point-in-time snapshots of your dataset at specified intervals.
AOF (Append Only File): AOF persistence logs every write operation received by the server. These operations can then be replayed again at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself.
No persistence: You can disable persistence completely. This is sometimes used when caching.
RDB + AOF: You can also combine both AOF and RDB in the same instance.
If RDB is enabled:
save 3600 1
save 300 100
save 60 10000
If AOF is enabled:
appendfsync everysec
If the primary is down, a replica will be automatically promoted as the new primary and the other replica will be reconfigured automatically to follow the new master.
Sentinel will give you the master node and the replica nodes.
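For example, assuming your cluster is reachable as "mycluster" and the monitored master is registered under the name "mymaster" (both names are assumptions; use your own deployment's values):

```shell
# Ask any Sentinel (port 26379) for the address of the current primary
redis-cli -h mycluster -p 26379 SENTINEL get-master-addr-by-name mymaster

# List the replicas known to Sentinel
redis-cli -h mycluster -p 26379 SENTINEL replicas mymaster
```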
This section lists which features / capabilities are available to customers, and how to request / perform them:
Self Service
Customer can perform action autonomously.
On Request
Customer can request the action to be performed by the cegedim.cloud support team.
SSH access
SSH access is disabled and reserved to cegedim.cloud administrators.
Redis/Sentinel access
Customer can log in with an account to Redis and Sentinel (password defined by customer in the provisioning wizard).
Change configuration file
On request via ticket.
| Parameter | Value | Description |
| --- | --- | --- |
| bind | @IP 127.0.0.1 | Listening address |
| timeout | 300 | Close the connection after a client is idle for N seconds (0 to disable) |
| logfile | /var/log/redis/redis-server.log | Log file path |
| supervised | systemd | Supervision interaction |
If AOF persistence is active, the following parameters will be applied:
| Parameter | Value |
| --- | --- |
| appendonly | yes |
| dir | /var/lib/redis/persistance |
| appendfsync | everysec |

If RDB is active, the following parameters will be applied:

| Parameter | Value |
| --- | --- |
| save | 3600 1 |
| save | 300 100 |
| save | 60 10000 |
| rdbcompression | yes |
| rdbchecksum | yes |
| dir | /var/lib/redis/persistance |
The following kernel parameters have been modified to optimize operating system performance for Redis :
vm.overcommit_memory = 1
vm.swappiness = 1
net.core.somaxconn = 65535
The authentication mode used is internal: Redis 6 ACL.
Passwords are hashed with SHA-256 and do not appear in plain text in the ACL file.
Redis 6 ACLs are used to manage authorizations.
On Sentinel, the dedicated client account has rights to :
On Redis, the dedicated customer account has rights to :
The customer can choose whether or not to activate TLS transport when requesting self-service creation via ITCare.
This section describes password management:
| Account | Password hashing |
| --- | --- |
| customer account | SHA-256 |
| any other account | SHA-256 |
| cgdm_admin account | SHA-256 |
| cgdm_monitor account | SHA-256 |
The following items are monitored and are accessible in ITCare.
| Check | Description |
| --- | --- |
| DBS_REDIS_CLI_CLIENTS | Check connected clients count |
| DBS_REDIS_CLI_AOF_STATUS | Check AOF status |
| DBS_REDIS_CLI_COMMANDS | Number of commands processed |
| DBS_REDIS_CLI_CONNECTIONS | Number of connections |
| DBS_REDIS_CLI_CPU | CPU usage |
| DBS_REDIS_CLI_MEMORY | Memory usage |
| DBS_REDIS_CLI_REPL_REPLICAS_COUNT | Check replicas count |
| DBS_REDIS_CLI_RDB_STATUS | RDB status |
| DBS_REDIS_SENTINEL_MASTER_UP | Checks the status of the master from Sentinel |
| DBS_REDIS_SENTINEL_SLAVES_COUNT | Check replicas count from Sentinel |
| DBS_REDIS_SENTINEL_SENTINELS_COUNT | Check Sentinels count |
| DBS_REDIS_SENTINEL_QUORUM | Check quorum status |
| TLS_REDIS_CERT_EXPIRATION | Check Redis certificate expiration |
| TLS_SENTINEL_CERT_EXPIRATION | Check Sentinel certificate expiration |
Redis, which stands for Remote Dictionary Server, is a fast, open-source, in-memory, key-value data store.
Redis is deployed on site in cegedim.cloud data centers.
cegedim.cloud guarantees the same level of service as the Compute offer: instance deployment, operational maintenance, flexibility, security and monitoring are all provided by our experts.
Two topologies are available:
Standalone instance
Sentinel cluster of 3 instances
Sizing can be configured to suit your needs.
|  | Standalone instance | Sentinel cluster |
| --- | --- | --- |
| Instance(s) | 1 | 3 |
| CPU (per instance) | 2 - 16 vCPU | 2 - 16 vCPU |
| RAM (per instance) | 4 - 384 GB | 4 - 384 GB |
| Supported version(s) | 6.2, 7.2 | 6.2, 7.2 |
| TLS/SSL | ✓ | ✓ |
| Monitoring | ✓ | ✓ |
| 24x7 Monitoring | ✓ | ✓ |
| Backup | ✓ | ✓ |
| Data replication (DRP) | ✓ | ✓ |
| Availability | 99.8% | 99.9% |
| Multi-AZ deployment | — | ✓ |
| Self-service | ✓ | ✓ |
The Sentinel cluster topology is production-ready, with 3 instances distributed across all the Availability Zones in a target Area.
Each instance runs the Redis and Sentinel processes. One instance is primary and the other two are replicas.
For more information, please visit Redis - Features.
Billing is processed monthly and based on the number of nodes, plus additional costs for storage, backup and 24/7 monitoring.
Cost estimates for Redis are available via your Service Delivery Manager.
To get started, connect to ITCare and search your target Global Service where you will create your new SQL Server. Once inside your Global Service, click on the Create Resource button in the top right corner.
Go to Managed databases and select SQL Server:
Pick the desired version and edition:
Name: Specify the new name for the SQL Server virtual machine.
In Always On Cluster mode, specify the name of the Availability Group.
Prefix: Provide a prefix to initialize the virtual machines in the cluster.
Sizing: Select a sizing for your instance. Default value and lowest sizing is 2 CPUs / 4 GB RAM.
Storage: Select the storage capacity required for SQL Server. Five disks are required.
Default and minimum storage is 30 GB for the root instance and user data disks, and 10 GB for the user log and tempdb disks.
Localization: Select the Region you want to deploy to. Pick an Area in this Region and finally select an Availability zone.
Network: Select the VLAN you want to deploy into. Ideally your backend VLAN.
Authentication: Select the authentication domain you want to deploy into.
Management: Activate management options.
Enable or disable Monitoring
Enable or disable 24/7 Monitoring
Enable backup of your virtual machine
Enable Replication of your virtual machines (on Disaster recovery site)
Provide the administrator password that you will use for your SQL Server administrator user.
cegedim.cloud will NOT save this password so please save it somewhere safe in your vault.
Confirm your password.
Choose your SQL collation.
Add key technologies available in SQL Server : SSIS, SSAS, SSRS
You can add a specific request before submission but it will delay the automated provisioning.
Click Next when done.
This page will summarize your inputs; please check everything is correct before submitting. You can display and save your administrator password.
Once reviewed and verified, click Submit.
Once the deployment is ready, you will be notified by email. Provisioning can take up to 1 hour depending on the current load on automation.
At the top of the resource page, click on the Manage button, then on Start and confirm.
An e-mail notification will be sent when the service is activated.
At the top of the resource page, click on the Manage button, then on Stop. Enter an RFC number for tracking (optional). Click on Submit.
Shutting down a cluster will stop all virtual machines attached to the cluster, and monitoring will be disabled.
An e-mail notification will be sent when the cluster is shut down.
At the top of the resource page, click on the Manage button, then on Resize. Select the new size (CPU / RAM).
An e-mail notification will be sent when all nodes have been resized.
At the top of the cluster page, click on the Manage button, then on Delete. This will stop and delete all virtual machines.
Please note that this action is not recoverable!
Enter an RFC number for tracking (optional), then click Submit.
An e-mail notification will be sent when the deployment is deleted.
Two topologies are available:
Standalone Instance
Always On Cluster
The Always On cluster configuration is based on a 3-node topology:
Two nodes located on the same site.
These nodes are configured to share the load or automatically failover in case of a failure.
An anti-affinity rule ensures that the active nodes do not coexist on the same hypervisor host, thus enhancing resilience.
Located on a secondary site to ensure disaster recovery (DR).
This node does not handle any active requests and is reserved exclusively for failover in the event of active node failure.
The passive node is subject to strict restrictions to comply with Microsoft License Mobility with Failover Rights:
No active workload: The passive node cannot execute SQL queries, reports, or user workloads.
Allowed operations:
Database consistency checks.
Full backups and transaction log backups.
Performance and resource monitoring.
Optimized licensing: With Software Assurance, the use of the passive node is included at no additional cost, provided these restrictions are followed.
Fault tolerance: Synchronous replication ensures that data is available in real-time on active nodes.
Disaster recovery: Deploying a passive node on a secondary site enhances security and business continuity.
Simplified maintenance: Planned failovers allow updates or technical interventions without service interruption.
Specific monitoring tailored for the Always On cluster is in place to:
Ensure compliance with restrictions related to the passive node.
Monitor performance and automatic failovers.
Prevent risks of non-compliance with licensing rules.
SQL Server is available on both cegedim.cloud's data centers:
EB4 - Boulogne-Billancourt, France
ET1 - Labège, France
As part of the Always On topology, an inactive node is automatically deployed in a nearby secondary site to enhance the resilience of the cluster:
EB5 (Magny-les-Hameaux, France)
ET2 (Balma, France)
Filesystem layout:
Due to prefixes applied to Active Directory objects, the name of the virtual machine provisioned is restricted to 13 characters maximum for a cegedim.cloud PaaS SQL Server.
Ports listing:
Only the SQL Server listener and SQL Server Browser ports are opened inbound in the Windows Firewall by default and enforced through a GPO on the Organization unit.
List of modules installed by default during provisioning:
Database engine
Replication
Full-text Search
Client tools connectivity
SDK
This section lists which features and capabilities are available to customers, and how to request or perform them:
The SQL Server PaaS runs exclusively in a Windows environment. The standard system login method is RDP (Remote Desktop Protocol).
In order to connect to the virtual machine, you need to have the required privileges either at the domain level or at the local machine level.
Authentication is configured by default in mixed mode which provides two login types:
SQL Server login: instance level
Active Directory user: domain level - embedded Windows authentication
Instance login is available locally or remotely:
Locally: once connected in RDP, launch the local SQL Server Management Studio
Remotely: launch the SQL Server Management Studio and specify the target instance
SSMS can use the Windows user credentials you are already logged in with through RDP to log in to the SQL Server instance.
Authentication with an SQL login is also possible locally.
Specify a target instance in the server name field enforcing the tcp protocol: tcp:HOSTNAME\INSTANCENAME
Just select "SQL Server Authentication" and provide the SQL Login with the associated password.
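The remote SQL authentication path above can be scripted. The sketch below only builds the ODBC-style connection string; the host, instance and account names are hypothetical, and the driver name is an assumption (adjust it to what is installed on your workstation):

```python
def build_sqlserver_conn_str(host, instance, user, password,
                             driver="ODBC Driver 18 for SQL Server"):
    """Build an ODBC connection string for SQL Server Authentication,
    enforcing the tcp protocol as described above (tcp:HOSTNAME\\INSTANCENAME).
    All parameter values here are hypothetical examples."""
    server = f"tcp:{host}\\{instance}"
    return (f"DRIVER={{{driver}}};SERVER={server};"
            f"UID={user};PWD={password}")
```

Such a string can then be passed to any ODBC client; whether additional TLS options are required depends on your environment.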
Authorizations for cegedim.cloud teams are managed by GPO.
This section describes password management for the SQL Server PaaS:
Authorizations for customers are managed by the customers themselves.
The customer who requests a SQL Server instance through ITCare is automatically granted access to the instance, and can then grant access to any Active Directory user or group.
Patches are installed during "Patch parties" managed by cegedim.cloud every quarter.
An instance can exceptionally be patched manually if security or bug fixes require it.
Data for cegedim.cloud's SQL Server PaaS is stored on the dedicated virtual machines created upon requesting a PaaS.
These virtual machines and the storage associated are hosted and managed in cegedim.cloud's own data centers.
If you don't have kubectl, we highly suggest installing it on your administration workstation, following the official installation instructions.
Kubent might fail to retrieve some information, e.g. the namespace of an ingress; feel free to file an issue with the editor:
A standalone instance (with a replica on request)
For more information on node attribute configuration, please refer to
Some of these extensions are developed within the PostgreSQL project itself, so they keep pace with the evolution of the various PostgreSQL versions. You can find a list here. Others are developed by third-party companies and follow their own release pace.
Pods are deployed using Kubernetes.
The update of a PostgreSQL PaaS is the responsibility of cegedim.cloud and can be requested via a support request submitted from ITCare, specifying a time slot for the operation.
As an average guideline, here are the durations of each step of an in-place upgrade of a 100 GB database. Depending on the path, the upgrade may also include an OS upgrade: Debian 9 → Debian 11 or Debian 12, Debian 10 → Debian 11 or Debian 12, or Debian 11 → Debian 12.
It provides built-in replication, different levels of on-disk persistence, and high availability.
| Type | SQL Server version | Operating system | Edition |
| --- | --- | --- | --- |
| Virtual | 2022 | Windows Server 2022 | Standard or Enterprise |
| Virtual | 2019 | Windows Server 2019 | Standard or Enterprise |
| Virtual | 2017 | Windows Server 2019 | Standard or Enterprise |
| Virtual | 2016 | Windows Server 2016 | Standard or Enterprise |
| Drive | Label | Default size | Usage |
| --- | --- | --- | --- |
| D:\ | MSSQL | 30 GB | Root instance |
| E:\ | MSSQL_USER_DATA | 30 GB | User databases |
| F:\ | MSSQL_USER_LOG | 10 GB | User databases log |
| G:\ | MSSQL_TEMPDB | 10 GB | TempDB |
| Port | Protocol | Usage |
| --- | --- | --- |
| 1433 | TCP | SQL Server static port listener |
| 1434 | UDP | SQL Server Browser |
| 2382 | UDP | SQL Server Analysis Services Browser |
| 2383 | TCP | SQL Server Analysis Services listener |
| 5022 | TCP | SQL Server DBM/AG endpoint |
Self Service
Customers can perform the action autonomously.
On Request
Customers can ask the cegedim.cloud support team to perform the action.
Database Collation
Integration Services
Analysis Services
Reporting Services
Full-Text Search
Export, Import SQL Server backup
Create Always On cluster
Available exclusively with SQL Server 2022 Enterprise edition; consult your service delivery manager for guidance.
admin account
ANY other account
cgdm_admin account
monitoring account
This method allows you to delete a MariaDB instance.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /mariadb/123
Id, example: 123
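All asynchronous ITCare calls follow the same client-side pattern: submit the request, receive 202, then poll the action status until it completes. A minimal sketch of that loop; the `get_status` callable and the status strings are assumptions to adapt to the actual action endpoint:

```python
import time

def wait_for_completion(get_status, timeout=600, interval=5):
    """Poll an async action until it leaves IN_PROGRESS or the
    timeout expires. `get_status` is any callable returning the
    current status string (e.g. a wrapper around a GET on the
    action's status endpoint -- endpoint not shown here)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status != "IN_PROGRESS":
            return status
        time.sleep(interval)
    raise TimeoutError("async action did not complete in time")
```

The same helper works for any of the delete/create operations on this page that return 202.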
This method allows you to delete an OpenSearch instance.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /opensearch/123
id, example: 123
Names, example: resource01,!resource02,resource42
Types, example: WINDOWS,AIX,LINUX
Families, example: DEBIAN,CENTOS,RHEL
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Filter List for Restore
Database Version, example: 11
Filter list by monitoring status
Filter list by monitoring on call status
Filter list by backup status
Filter list by DRP status
Filter list by patch party status
Topology, example: AlwaysOn, Galera, Replica Set, Cluster, Standalone, HA, etc..
Version, example: 2.11.0, 2022 EE, etc...
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
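The pagination and sorting parameters described above are shared by most list endpoints. A small helper assembling them; the parameter names follow the descriptions above, and any extra filter passed in is a hypothetical example:

```python
def build_list_params(page=0, size=20, sort=None, **filters):
    """Assemble query parameters for the list endpoints: `page`
    (0..N), `size` (records per page) and a single `sort` criterion
    in `property(,asc|desc)` format. Extra keyword arguments become
    filter parameters; None values are dropped."""
    params = {"page": page, "size": size}
    if sort:
        params["sort"] = sort  # multiple sort criteria are not supported
    params.update({k: v for k, v in filters.items() if v is not None})
    return params
```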
This method allows you to delete a Matomo instance.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /analytics/matomo/123
id, example: 123
Names, example: resource01,!resource02,resource42
Types, example: TOMCAT or WILDFLY or WEB_ZONE
Families, example: DEBIAN,CENTOS,RHEL
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Filter list by monitoring status
Filter list by monitoring on call status
Filter list by backup status
Filter list by DRP status
Filter list by patch party status
Availability Zone, example: EB-A, EB-B, EB-C, etc...
IPs, example: 10.59.13.29
VLAN, example: EB_1125_DMZ8
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
Names, example: resource01,!resource02,resource42
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: agkey:mytagvalue,application:itcare
Filter list by monitoring status
Filter list by monitoring on call status
Filter list by backup status
Filter list by DRP status
Filter list by patch party status
Topology, example: Standard, HA
Version, example: v1.26.15, v1.28.13, etc...
Region, example: EB,ET
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
K8sCluster Id, example: 123
type, example: az-distribution | dc-distribution
This method allows you to delete a Kubernetes cluster node.
This operation cannot be undone afterwards.
This method is synchronous (status code 202).
To delete a Kubernetes node, the id of the Kubernetes cluster and the id of the node to delete must be specified.
To list the nodes of the Kubernetes cluster, use the endpoint: GET /containers/kubernetes/{kubernetesId}/nodes.
Use the following to delete a node of a Kubernetes cluster.
DELETE /containers/kubernetes/1234/nodes/4567
To keep the Kubernetes cluster consistent, please note that:
You cannot delete all the nodes of a cluster.
You cannot delete all the ingress nodes of a cluster.
API users will get a BAD_REQUEST when trying to break one of the rules above.
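A client can pre-check these consistency rules before calling DELETE, avoiding a BAD_REQUEST round trip. A sketch under the assumption that each node is represented as a dict with hypothetical `id` and `ingress` keys:

```python
def deletable_nodes(nodes):
    """Return the ids of nodes that can be deleted one at a time
    without breaking the rules above: at least one node overall,
    and at least one ingress node, must always remain."""
    ingress = [n for n in nodes if n.get("ingress")]
    out = []
    for n in nodes:
        if len(nodes) <= 1:
            continue  # cannot delete the last node of the cluster
        if n.get("ingress") and len(ingress) <= 1:
            continue  # cannot delete the last ingress node
        out.append(n["id"])
    return out
```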
id, example: 123
id, example: 456
Names, example: resource01,!resource02,resource42
Types, example: WINDOWS,AIX,LINUX
Families, example: DEBIAN,CENTOS,RHEL
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Filter list by monitoring status
Filter list by monitoring on call status
Filter list by backup status
Filter list by DRP status
Filter list by patch party status
Availability Zone, example: EB-A, EB-B, EB-C, etc...
IPs, example: 10.59.13.29
VLAN, example: EB_1125_DMZ8
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
Resource Id, example: 123
Snapshot Id, example: 123-snap-42
Service Id, example: 1234
availabilityZone
policyType, example: SERVER
backupReplicated
Platform, example: cent7, ubu22
Storage specification of platform (disks / max sizes...)
Resource profiles (CPU/RAM) that can be allocated to instances.
Properties specification of platform (package / script, backup type...)
Resource Id, example: 1234
Service Id, example: 56789
Platform, example: Debian 8
Support Phases, example: STANDARD,EXTENDED
Start Date (ISO8601 format), example: 2022-07-22T00:00:00.000Z
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
Returns list of platforms, their current support phase and milestones concerning their phases of support.
Platform name, example: PaaS OpenSearch
Allows you to list the resources available in your cloud.
This endpoint can be used as a main entry point to get information about the other kinds of resource types.
When a resource is retrieved, the *path* attribute allows you to navigate to the right category of the resource.
For example, *path=/compute/containers/kubernetes* tells you to check the sections *compute > containers > kubernetes* for more details.
IDs, example: 123,456,789
Names, example: resource01,!resource02,resource42
Types, example: WINDOWS,AIX,LINUX
Families, example: DEBIAN,CENTOS,RHEL
Versions, example: DEBIAN_10,CENTOS_6,RHEL_5
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
Service Ids, example: 1234
Environment, example: PRODUCTION,QA
Technologies of resource. For example: LINUX,KUBERNETES
Count statistic with uncategorized types
global|count|obsolescence|service|network, example: global
Start Date (ISO8601 format), example: 2022-07-22T00:00:00.000Z
Resource Id, example: 123
Resource/Node Ids, example: 123
[]
Operation Type, example: available|available-nodes|in-progress|in-progress-nodes|list-actions-in-progress|list-available-actions
available
Gets a compute Resource by its Id.
A *Resource* is the ITCare base object.
A resource is composed of:
*id*: Unique identifier of the resource
*name*: Name of the resource
*serviceId*: Each resource must be linked to a service. The service is a logical entity that hosts resources per environment, application...
*environment*: The environment of the resource. Can be for example 'PRODUCTION', 'QA', 'RECETTE_UAT', 'DEV'...
*creationUser*: The creator of the resource
*creationTime*: When the resource was created
*comment*: Description of the resource
*category*: High-level categorization of the resource
*family*: Family of the resource within the category
*status*: Status of the resource. It can be 'ACTIVE' (running) or 'INACTIVE' (stopped)
*resourceType*: Type of the resource
*cloudId*: Id of the related cloud
*cloudName*: Name of the related cloud
*path*: Helper that indicates in which category to find the resource for more detailed operations
When the resource is retrieved, the *path* attribute allows you to navigate to the right category of the resource.
Resource Id, example: 123
Retrieves all operations and events that occur on a resource: start, stop, monitoring operations...
Resource Id, example: 123
Actions, example: enable_monitoring
Statuses, example: SUCCESS
Names, example: resource01,!resource02,resource42
Environments, example: PRODUCTION,DEVELOPMENT
Statuses, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
Results page you want to retrieve (0..N)
Number of records per page.
Category, example: INSTANCES
""
Service Id, example: 44411
Names, example: REBMYAPP01,REBMYSRV
Backup
Drp
Statuses, example: ACTIVE,INACTIVE,PREPARATION
Service Id, example: 44411
Broker name, example: deblaprmq01
Broker status, example: ACTIVE
Broker version, example: 3.9
Broker size, example: 4cpu8gb
Service Id, example: 44411
Names, example: PET1
Statuses, example: ACTIVE,INACTIVE,PREPARATION
Service Id, example: 44411
Names, example: REBMYAPP01,REBMYSRV
Families, example: DEBIAN,RHEL
Backup
Drp
withManagedNodes
withApplicationServers
withOracleDbs
withMongoNodeJs
Statuses, example: ACTIVE,INACTIVE,PREPARATION
Service Id, example: 44411
Names, example: REBMYAPP01,REBMYSRV
Statuses, example: ACTIVE,INACTIVE,PREPARATION
Versions, example: EB,ET,NK
Service Id, example: 44411
Names, example: www.cegedim.com,www.egypt.eg
Statuses, example: ACTIVE,INACTIVE,PREPARATION
Number of Members, example: 2
Service Id, example: 44411
Names, example: www.cegedim.com,www.egypt.eg
Statuses, example: ACTIVE,INACTIVE,PREPARATION
Service Id, example: 44411
Names, example: PET1
Statuses, example: ACTIVE,INACTIVE,PREPARATION
This method allows you to update the patch party information related to the given service.
Structure of payload is generic and describes:
an array containing the patch party configuration to apply for each resource of the given service.
Update Patch Party Statuses
PATCH /services/1234/patch-policies
[
{
"resourceId": 500079802,
"excluded": false,
"exclusionReason": "I don't want to include this resource"
},
{
"resourceId": 500079545,
"excluded": true,
"patchGroup": "2"
},
{
"resourceId": 500057033,
"excluded": false,
"exclusionReason": "Wrong patch group is set",
"patchGroup": "1c"
},
{
"resourceId": 500057055,
"excluded": false,
"patchGroup": "1"
}
]
[
{
"status": "FAILED",
"message": "The patch group is only allowed when the farm has one member",
"id": -1,
"internalId": 500057055
},
{
"status": "IN_PROGRESS",
"message": "Include PatchParty SQLServer rhutsql20",
"process": "INCLUDE_PATCHPARTY",
"id": 500079545,
"lastUpdatedAt": "2023-11-16T11:53:42.888+00:00"
},
{
"status": "FAILED",
"message": "Wrong patch party group set",
"id": -1,
"internalId": 500057033
},
{
"id": 202
}
]
There are 3 groups available defining the sequence in which the instances should be updated: 1 (First Group), 2 (Second Group) or 3 (Third Group).
If no group is set, it means you have no preference when defining the sequences.
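A client-side sanity check of the payload shape shown above can catch mistakes such as the invalid group "1c" before calling the API. A sketch; the field names follow the example payload:

```python
VALID_GROUPS = {"1", "2", "3"}  # First, Second and Third patch groups

def check_patch_entry(entry):
    """Validate one element of the PATCH /services/{id}/patch-policies
    payload (non-exhaustive sketch based on the examples above)."""
    if "resourceId" not in entry or "excluded" not in entry:
        return False
    group = entry.get("patchGroup")
    if group is not None and group not in VALID_GROUPS:
        return False  # e.g. "1c" is rejected by the API
    return True
```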
Service Id, example: 44411
Boolean flag to fetch history details for every CI
Service Id, example: 44411
Names, example: REBMYAPP01,REBMYSRV
Categories, example: INSTANCES,APPLICATION_SERVERS,LOAD_BALANCERS
Statuses, example: ACTIVE,INACTIVE,PREPARATION
Service Id, example: 44411
Names, example: devvcaglfs02
Statuses, example: ACTIVE,INACTIVE,PREPARATION
Sizing of the resource, example: 2cpu4gb
Number of nodes, example: 2
IP Address, ex: 10.10.10.10
Service Id, example: 500063721
Actions, example: enable_monitoring
Statuses, example: SUCCESS
Names, example: REBITTEST01
This method allows you to delete a PostgreSQL instance.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /postgresql/123
id, example: 123
Names, example: resource01,!resource02,resource42
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
This method allows you to delete a Redis instance.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /redis/123
id, example: 123
Names, example: resource01,!resource02,resource42
Types, example: WINDOWS,AIX,LINUX
Families, example: DEBIAN,CENTOS,RHEL
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Filter list by monitoring status
Filter list by monitoring on call status
Filter list by backup status
Filter list by DRP status
Filter list by patch party status
Topology, example: Cluster, Standalone, etc..
Version, example: 2.7.0, 3.6.0, 3.9.29-1, etc...
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
This method allows you to delete an Apache Kafka platform.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /message-brokers/apache-kafka/123
id, example: 123
This method allows you to delete a RabbitMQ broker instance.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /message-brokers/rabbitmq/123
id, example: 123
By default, the user's clouds are used to filter the final output.
Names, example: resource01,!resource02,resource42
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Filter list by monitoring status
Filter list by monitoring on call status
URLs, example: .cegedim.cloud
IRules, iRule-Redirect-gis-workflow
Default Persistence, example: cookie,hash, or source_addr etc...
Fallback Persistence, example: dest_addr, source_addr, etc...
Load Balancing Mode, example: least-connections-node, round-robin, etc...
Protocols, example: HTTP, HTTPS, MYSQL, etc...
VLAN, example: EB_1125_DMZ8
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
id, example: 500067154
From Date (ISO8601 format), example: 2023-03-15T00:00:00.000Z
To Date (ISO8601 format), example: 2023-03-16T00:00:00.000Z
type, example: security
security
criteria, example: bot
bot
size, example: 20
20
This method allows you to delete a URL of a Load Balancer.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /loadbalancers/123/urls/456
Load Balancer Id, example: 123
Load Balancer Url Id, example: 123
Resource Id, example: 123
Resource/Node Ids, example: 123
[]
LB/URL Ids, example: 123
[]
Resource Ids : urls or members, example: 123
[]
Operation Type, example: available|available-nodes|in-progress|in-progress-nodes|list-actions-in-progress|list-available-actions
available
Filter, example: resource01,!resource02,resource42
publicIp
Environments, example: QA
Scopes, example: frontend , backend
Regions, example: EB,NK
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
This method returns the list of accessible networks from a Load balancing Zone.
Use the scope query parameter to filter private, interco and internet networks. For 'frontend' and 'backend' networks, use scope=private.
Use the environment query parameter to filter production and non_production networks. For 'production' networks, use environment=production.
Use onlyNonFull if you want only networks with available IP addresses to be listed.
Use the clouds parameter (comma-separated list of long) to restrict results to the specified Cloud IDs (use /me to obtain the list of your Clouds).
Names, example: resource01,!resource02,resource42
Environments, example: PRODUCTION,DEVELOPMENT
Status, example: ACTIVE,INACTIVE
Tags, example: mytagkey:mytagvalue,application:itcare
Filter list by monitoring status
Filter list by monitoring on call status
Filter list by backup status
Filter list by DRP status
Filter list by patch party status
Topology, example: Cluster
Version, example: v1.26.15, v1.28.13, etc...
VirtualIp, example: 127.0.0.1, 127.0, 127, 10.%.62
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
Names, example: overdrive1
Results page you want to retrieve (0..N)
Number of records per page.
Sorting criteria in the format: property(,asc|desc). Default sort order is ascending. Multiple sort criteria is not supported.
This method allows you to delete an Overdrive instance.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
DELETE /storage/overdrive/123
id, example: 123
EB
EB-INT
""
A low-latency network area, in which we can create load balancers to address several availability zones within this area.
Use the param withAvailabilityZones=true to retrieve the Availability Zones of the region.
Region Name, example: EB
Include Areas
false
A low-latency network area, in which we can create load balancers to address several availability zones within this area.
Use the param withAvailabilityZones=true to retrieve the Availability Zones of the region.
Region Name, example: EB
Platform Id, example: deb10
Include Availability Zone
true
This method returns the list of accessible networks from an Availability Zone.
You can use the scope query parameter to filter frontend/backend networks, and onlyNonFull if you want only networks with available IP addresses to be listed.
Region Name, example: EB
Area Name, example: EB-QA
AZ Name, example: EB-QA-A
This method returns the list of accessible authentication domains within a network.
Region Name, example: EB
Area Name, example: EB-QA
AZ Name, example: EB-QA-A
Network Id, example: 123
A healthcheck is a test performed on a load-balanced URL to retrieve the status of the service.
Use the clouds parameter (comma-separated list of long) to restrict results to the specified Cloud IDs (use /me to obtain the list of your Clouds).
Region Name, example: EB
Area Name, example: EB-QA
This method returns the list of accessible networks from a Load balancing Zone.
Use the scope query parameter to filter private, interco and internet networks. For 'frontend' and 'backend' networks, use scope=private.
Use the environment query parameter to filter production and non_production networks. For 'production' networks, use environment=production.
Use onlyNonFull if you want only networks with available IP addresses to be listed.
Use the clouds parameter (comma-separated list of long) to restrict results to the specified Cloud IDs (use /me to obtain the list of your Clouds).
Region Name, example: EB
Area Name, example: EB-QA
This method returns the list of accessible networks from an Area.
Use scope to filter by network scope.
Use onlyNonFull if you want only networks with available IP addresses to be listed.
Region Name, example: EB
Area Name, example: EB-QA
Event environments
ALL,NON_PROD,PROD
Event types
CUSTOMER,MAINTENANCE_SLOT
Maintenance types
SWITCH,NETWORK,PATCH_PARTY
Start Date (ISO8601 format)
2022-04-01T22:00:00.000Z
End Date (ISO8601 format)
2022-04-30T22:00:00.000Z
This method allows you to create a MariaDB instance.
You will have to know at the minimum:
the Area (area attribute). Areas are available in the List Regions method.
the name (name attribute). The name can contain any lowercase characters or numbers (5-60). It must not be the keyword 'cluster'.
the node sizing (nodeSizing attribute). Ex: 2cpu2gb
the disk size (diskSize attribute). The possible values are at least 40 and at most 1024 (representing GB).
the administrator password (admPassword attribute). The password must contain at least one lowercase, one uppercase, one digit and one special character; minimum length is 12.
the MariaDB version (version attribute). Example: 10.6
the service (serviceId attribute).
the network (networkId attribute).
the instance count (instanceCount attribute). Minimum 1 and maximum 3.
the topology (topology attribute). Either single or cluster.
Optional fields:
the availability zone (az attribute).
TLS (tlsEnabled attribute).
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /mariadb
{
"version": "10.6",
"region" : "EB",
"area": "EB-QA",
"az": "az",
"name": "Test123",
"nodeSizing": "2cpu2gb",
"diskSize": 40,
"networkId": 1234511,
"serviceId": 46922,
"admPassword": "Test123@2022",
"instanceCount": 1,
"topology" : "SINGLE"
}
The admin password
^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$
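The admPassword constraint can be checked client-side before submitting the creation request. A sketch using the exact regex from this page:

```python
import re

# Pattern copied verbatim from the documentation above.
ADM_PASSWORD_RE = re.compile(
    r"^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])"
    r"(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$"
)

def is_valid_adm_password(pwd):
    """True if the candidate password satisfies the documented rule:
    one digit, one lowercase, one uppercase, one special character,
    12 to 20 characters long."""
    return ADM_PASSWORD_RE.match(pwd) is not None
```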
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Availability zone of the MariaDB instance
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
The storage needed on each data node of the MariaDB instance
Number of instances to create for MariaDB
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Name of the MariaDB instance
[a-z0-9\-]{4,60}$
The network Id of the MariaDB cluster
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Node sizing for cluster
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Product platform of the cluster
Region. A low-latency network area, available in the List Regions method. If absent, the default Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
Version of the MariaDB cluster
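The creation constraints listed above (name pattern, disk size, instance count, topology) can be validated before POSTing. A non-exhaustive sketch; the field names follow the creation payload:

```python
import re

# Name pattern copied from the documentation above.
NAME_RE = re.compile(r"^[a-z0-9\-]{4,60}$")

def check_mariadb_payload(p):
    """Minimal client-side sanity check of a POST /mariadb payload.
    Returns a list of human-readable errors (empty when the checked
    constraints pass)."""
    errors = []
    if not NAME_RE.match(p.get("name", "")):
        errors.append("name must be 4-60 lowercase chars, digits or '-'")
    if not 40 <= p.get("diskSize", 0) <= 1024:
        errors.append("diskSize must be between 40 and 1024 GB")
    if not 1 <= p.get("instanceCount", 0) <= 3:
        errors.append("instanceCount must be between 1 and 3")
    if p.get("topology", "").upper() not in {"SINGLE", "CLUSTER"}:
        errors.append("topology must be single or cluster")
    return errors
```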
This method allows you to update a MariaDB instance.
Structure of payload is generic and describes:
the operation you want to be performed
the options: data relative to the operation performed - see details - optional
Below are the different operations currently implemented.
Start MariaDB instance
Use the start
operation to start a MariaDB instance.
Starts the MariaDB instance.
This method is synchronous (status code 202).
Example :
PATCH /mariadb/1234
{
"operation": "start"
}
Stop MariaDB instance
Use the stop
operation to stop the nodes of the MariaDB instance and the instance itself.
This operation cannot be undone afterwards.
This method is synchronous (status code 202).
PATCH /compute/databases/mariadb/1234
{
"operation":"stop"
}
Resize MariaDB instance
Use the resize
operation to resize the sizing of the MariaDB instance.
This operation cannot be undone afterwards.
This method is synchronous (status code 202).
PATCH /compute/databases/mariadb/1234
{
"operation":"resize",
"options" : {
"sizing" : "2cpu4gb"
}
}
Update Monitoring
Use the update_monitoring
operation to update the monitoring state of the MariaDB instance.
Use the state
option to turn on/off monitoring.
Use the on_call
option to turn on/off 24/7 monitoring.
This method is synchronous (status code 202).
PATCH /mariadb/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party
operation to update the patch party scheduled plan for the MariaDB instance.
Use the excluded option to turn on/off the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional, and is only allowed when the farm has one member.
Use the exclusionReason option to explain the reason for excluding the resource from the patch party.
This method is synchronous (status code 202).
PATCH /mariadb/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /mariadb/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
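All of the update payloads above share the same shape: an operation name plus optional options. As a minimal sketch (the build_update helper and the operation set below are illustrative, not part of the ITCare client), a payload can be built and sanity-checked before sending the PATCH:

```python
# Sketch of a helper building the generic MariaDB update payload.
# The operation names are the ones documented above; the helper
# itself is illustrative, not part of the ITCare API.
MARIADB_OPERATIONS = {"start", "stop", "resize",
                      "update_monitoring", "update_patch_party"}

def build_update(operation, options=None):
    """Return the PATCH body for /mariadb/{id}."""
    if operation not in MARIADB_OPERATIONS:
        raise ValueError("unsupported operation: %s" % operation)
    payload = {"operation": operation}
    if options is not None:
        payload["options"] = options
    return payload

# Examples mirroring the documented payloads:
start = build_update("start")
resize = build_update("resize", {"sizing": "2cpu4gb"})
```

The same shape is reused by the OpenSearch, Matomo, cluster, instance and PostgreSQL update methods below.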
Id, example: 123
This method allows you to create an OpenSearch instance.
You will have to know at the minimum:
the area (area attribute). Areas are available in the List Regions method.
the name (name attribute). The name can contain any lowercase characters or numbers (5-60). It must not be the keyword 'cluster'.
the node sizing (nodeSizing attribute). Ex: 2cpu4gb
the disk size (diskSize attribute). The possible values are at least 40 and at most 1024 (representing GB).
the admin password (admPassword attribute). The password must contain at least one lowercase, one uppercase, one digit and one special character; the minimum length is 12.
the cluster version (clusterVersion attribute). Example: 1.2.3
the service (serviceId attribute).
the network (networkId attribute).
the instance count (instanceCount attribute). Must be odd and at least 3; recommended is 5, maximum is 51.
the node prefix (nodePrefix attribute). 4 to 60 uppercase characters.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /opensearch
{
"clusterVersion": "1.2.3",
"region" : "EB",
"area": "EB-QA",
"az": "az",
"name": "Test123",
"nodeSizing": "2cpu4gb",
"diskSize": 40,
"networkId": 1234511,
"serviceId": 46922,
"admPassword": "Test123@2022",
"instanceCount": 3,
"nodePrefix" : "OPESTC"
}
The admin password must match:
^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$
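The password rule can be checked client-side with Python's re module before posting; the pattern below is copied verbatim from the documented rule, and the helper name is illustrative:

```python
import re

# Password rule documented above: at least one digit, one lowercase,
# one uppercase, one special character, 12 to 20 characters overall.
ADM_PASSWORD_RE = re.compile(
    r"^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$"
)

def is_valid_adm_password(password):
    return ADM_PASSWORD_RE.match(password) is not None
```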
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
The storage needed on each data node of the ELS cluster
Number of instances to create in ELS cluster
[13579]$
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Name of els cluster
[a-z0-9_\-]{4,60}$
The network Id of the ELS cluster
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Prefix of the node names for els cluster
[A-Z0-9-.]{4,60}$
Node sizing for cluster
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Product platform of the cluster
Region, that is a low-latency network area, available in the List Regions method. If absent, the default Area of the Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
This method allows you to update an OpenSearch instance.
The structure of the payload is generic and describes:
the operation you want to be performed
the options : data relative to the operation performed - see details - optional
Below are the different operations currently implemented.
Start OpenSearch instance
Use the start
operation to start an OpenSearch instance.
Starts OpenSearch instance.
This method is synchronous (status code 202
).
Example :
PATCH /opensearch/1234
{
"operation": "start"
}
Stop OpenSearch instance
Use the stop
operation to stop the nodes of the OpenSearch instance and the instance itself.
This operation cannot be undone afterwards.
This method is synchronous (status code 202
).
PATCH /opensearch/1234
{
"operation":"stop"
}
Add nodes to the OpenSearch instance
Use the add_nodes
operation to add nodes to the OpenSearch instance.
The nodesCount option must be even.
This method is synchronous (status code 202
).
PATCH /opensearch/1234
{
"operation":"add_nodes",
"options" : {
"diskSize" : 10,
"nodeSize" : "2cpu4gb",
"nodesCount": 2
}
}
Resize OpenSearch instance
Use the resize_nodes
operation to change the sizing of the OpenSearch nodes.
This operation cannot be undone afterwards.
This method is synchronous (status code 202
).
PATCH /opensearch/1234
{
"operation":"resize_nodes",
"options" : {
"nodeSize" : "2cpu4gb",
"nodes" : ["node1"]
}
}
Update Monitoring
Use the update_monitoring
operation to update the monitoring state of the cluster.
Use the state
option to turn on/off monitoring.
Use the on_call
option to turn on/off 24/7 monitoring.
This method is synchronous (status code 202
).
PATCH /opensearch/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party
operation to update the patch party scheduled plan of the OpenSearch instance.
Use the excluded option to turn on/off the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional and is only allowed when the farm has one member.
Use the exclusionReason option to explain the reason for excluding the resource from the patch party.
This method is synchronous (status code 202).
PATCH /opensearch/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /opensearch/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
id, example: 123
This method allows you to create a Matomo instance.
You will have to know at the minimum:
the region (region attribute)
the name (name attribute)
the sizing (sizing attribute). The possible values are: XS (up to 100K ppm), S (up to 1M ppm), M (up to 10M ppm), L (up to 100M ppm), XL (more than 100M ppm)
the password (password attribute). The password must contain at least one lowercase, one uppercase, one digit and one special character; the minimum length is 12.
the service (serviceId attribute).
the default website (defaultWebSiteName and defaultWebUri attributes).
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /analytics/matomo
{
"name": "pmatomo01",
"region": "EB",
"sizing":"XS",
"serviceId":"123",
"password": "Password!!??",
"defaultWebSiteName": "ITCare",
"defaultWebUri": "https://itcare.cegedim.cloud"
}
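Given the sizing tiers above (XS up to 100K page views per month, S up to 1M, M up to 10M, L up to 100M, XL beyond), a sizing can be chosen programmatically. A sketch, with the thresholds taken from this page and the helper name being illustrative:

```python
# Pick a Matomo sizing from expected page views per month (ppm),
# using the tiers documented above. Illustrative helper only.
def matomo_sizing(pageviews_per_month):
    if pageviews_per_month <= 100_000:
        return "XS"
    if pageviews_per_month <= 1_000_000:
        return "S"
    if pageviews_per_month <= 10_000_000:
        return "M"
    if pageviews_per_month <= 100_000_000:
        return "L"
    return "XL"
```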
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
Default website name to be configured in Matomo, if left empty, a dummy value will be configured
Default website url to be configured in Matomo, if left empty, a dummy value will be configured
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Name of Matomo Instance
[a-z0-9\-]{4,60}$
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Password to connect to the matomo instance.
Region, that is a low-latency network area, available in the List Regions method. If absent, the default Area of the Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
Sizing for Matomo instances. Possible values: XS, S, M, L, XL.
This method allows you to update a Matomo instance.
The structure of the payload is generic and describes:
the operation you want to be performed
the options : data relative to the operation performed - see details - optional
Below are the different operations currently implemented.
Start matomo instance
Use the start
operation to start a matomo instance.
Starts matomo instance.
This method is synchronous (status code 202
).
Example :
PATCH /analytics/matomo/1234
{
"operation": "start"
}
Stop matomo instance
Use the stop
operation to stop the nodes of the matomo instance and the instance itself.
This operation cannot be undone afterwards.
This method is synchronous (status code 202
).
PATCH /analytics/matomo/1234
{
"operation":"stop"
}
Extend matomo instance
Use the extend
operation to extend the sizing of the Matomo instance.
This operation cannot be undone afterwards.
This method is synchronous (status code 202
).
PATCH /analytics/matomo/1234
{
"operation":"extend",
"options" : {
"sizing" : "M"
}
}
id, example: 123
This method allows you to create a cluster.
You will have to know at the minimum:
the area (area attribute)
the network (networkId attribute)
the name (name attribute)
the node sizing (nodeSizing attribute)
the instance count (instanceCount attribute)
the service (serviceId attribute)
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /clusters
{
"name": "PCLUSTER01",
"area": "EB",
"networkId": "ED145",
"serviceId": 123,
"nodeSizing": "1cpu2gb",
"instanceCount": 2
}
Describes the k8s container to be created.
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
Kubernetes Container Ingress Providers
Number of instances to create in k8s cluster
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Name of k8s cluster
[a-z0-9\-]+
The network Id of the k8s cluster
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Node sizing for cluster
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Product platform of the cluster
Region, that is a low-latency network area, available in the List Regions method. If absent, the default Area of the Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
This method allows you to delete a K8s cluster.
This method is asynchronous (status code 202
) and you'll have to wait for async action to be completed by checking its status.
DELETE /k8s-clusters/123
DELETE /k8s-clusters/123
{
"changeReference": "rfc number 456"
}
id, example: 123
Parameters when deleting a resource
Optional reference for change
This method allows you to update a cluster.
The structure of the payload is generic and describes:
the operation you want to be performed
the options : data relative to the operation performed - see details
Below are the different operations currently implemented.
Create Nodes
Use the create_nodes
operation to create the nodes of a cluster.
Create nodes operation will add the new nodes in the cluster by availability zone. You can specify the availability zone you need in the request.
This method is synchronous (status code 202
).
Example :
PATCH /containers/kubernetes/1234
{
"operation": "create_nodes",
"options": {
"nodes": [
{
"nodesNb": 1,
"nodeSizing": "2cpu4gb",
"az": "EB-A"
},
{
"nodesNb": 2,
"nodeSizing": "4cpu8gb",
"az": "EB-B"
}
]
}
}
Delete Nodes
Use the delete_nodes
operation to delete the nodes of a cluster.
This operation cannot be undone afterwards.
This method is synchronous (status code 202
).
PATCH /containers/kubernetes/1234
{
"operation":"delete_nodes",
"options":{
"nodes": ["11112","11113","11114"]
}
}
Enable High Availability - HA
Use the enable_ha
operation to enable the HA of a cluster.
This operation cannot be undone afterwards.
This method is synchronous (status code 202
).
PATCH /containers/kubernetes/1234
{
"operation":"enable_ha"
}
Update Monitoring
Use the update_monitoring
operation to update the monitoring state of the cluster.
Use the state
option to turn on/off monitoring.
Use the on_call
option to turn on/off 24/7 monitoring.
This method is synchronous (status code 202
).
PATCH /containers/kubernetes/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party
operation to update the patch party scheduled plan of the cluster.
Use the excluded option to turn on/off the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional and is only allowed when the farm has one member.
Use the exclusionReason option to explain the reason for excluding the resource from the patch party.
This method is synchronous (status code 202).
PATCH /containers/kubernetes/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /containers/kubernetes/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
Upgrade
Use the upgrade
operation to upgrade the cluster.
Use the version option to set the target version to be installed.
The target version must be available for the cluster: use /compute/platform/products?type=KUBERNETES to list all available versions.
This method is synchronous (status code 202).
PATCH /containers/kubernetes/1234
{
"operation": "upgrade",
"options": {
"version": "1.24"
}
}
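Combining the two steps above, the newest available version can be selected and wrapped in the upgrade payload. A sketch; the pick_latest helper and the sample version strings are illustrative, the endpoint and payload shape are the ones documented above:

```python
# Choose the highest Kubernetes version from a list of "major.minor"
# strings, such as those returned when listing available products.
def pick_latest(versions):
    return max(versions, key=lambda v: tuple(int(part) for part in v.split(".")))

# Build the documented upgrade payload from the chosen version:
target = pick_latest(["1.22", "1.24", "1.23"])
payload = {"operation": "upgrade", "options": {"version": target}}
```

Note the numeric comparison: a plain string max() would rank "1.9" above "1.10".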
K8s Cluster Id, example: 123
K8sCluster Id, example: 123
Describes a load balancer.
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
certificate of the load balancer., example: wildcard_cegedim.com
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
healthcheck of load balancer., example: http
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Network id. Refer to networks available in List Networks method. If absent, a default network of AZ will be used.
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
port member of load balancer., example: 80, 443, ...
profile name of load balancer.
Region, that is a low-latency network area, available in the List Regions method. If absent, the default Area of the Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
Indicates if a DNS record is to be set. If absent, set to false.
ssl profile of the load balancer., example: profile_wildcard.cegedim.com_secure
url of load balancer. Must be unique, and fit naming rules convention., example: url.cegedim.com
^(https?:\\/\\/)?(www\\.)?[a-zA-Z][a-zA-Z0-9.-]{2,63}+$
port of load balancer in case of TCP VS Profile
This method allows you to create an instance.
You will have to know at the minimum:
the region (region attribute)
the platform (platform attribute)
the name (name attribute)
the resource (resourceId attribute)
the service (serviceId attribute)
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /instances
{
"name": "PINSTANCE01",
"region": "EB4",
"serviceId": 13,
"platform": "deb8",
"resourceId": "1cpu2gb"
}
This will create a Debian 8 machine (1cpu 2gb RAM) in EB4 region, named PINSTANCE01, and put it into service of ID 13.
By setting only these parameters, ITCare will use the default profile of the image (disk configuration) and choose the most appropriate Availability Zone and network to host your instance. If you want to specify those parameters, take a look at the other examples in this documentation.
Response :
{
"id": "1333",
"status": "IN_PROGRESS"
}
With some Python code, you can create an instance and wait for completion like this:
import time

instance = {
    "name": "PINSTANCE01",
    "region": "EB4",
    "serviceId": 13,
    "platform": "deb8",
    "resourceId": "1cpu2gb"
}
action = itcare.post('/api/instances', payload=instance)
while action['status'] == 'IN_PROGRESS':
    time.sleep(1)
    action = itcare.get('/api/actions/{}'.format(action['id']))
print(action['status'])
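The polling loop above can be wrapped in a reusable helper. Here is a testable sketch where the HTTP call is injected as a callable, so any client (including the itcare one shown above) can be plugged in; the helper name and the stub are illustrative:

```python
import time

# Poll an async action until it leaves IN_PROGRESS. `get_action` is any
# callable taking an action id and returning the action dict, so the
# HTTP client can be injected (or stubbed out, as below).
def wait_for_action(get_action, action, interval=1.0):
    while action['status'] == 'IN_PROGRESS':
        time.sleep(interval)
        action = get_action(action['id'])
    return action

# Example with a stub instead of the real itcare.get:
responses = iter([{'id': '1333', 'status': 'IN_PROGRESS'},
                  {'id': '1333', 'status': 'SUCCESS'}])

def fake_get(action_id):
    return next(responses)

final = wait_for_action(fake_get, {'id': '1333', 'status': 'IN_PROGRESS'}, interval=0)
```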
Choose target Platform and properties
You'll have to know which platform you want to create, and so use Platforms methods to properly fill in relevant attributes (disks / custom properties / allocated resources...).
Choose Availability Zone and Network
You may want to choose your availability zone and network, you can do this by adding availabilityZone
and networkId
parameters to your request.
To discover both availability zones and networks, you can use methods Regions, AZ, and Networks.
Describes the instance to be created.
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
authentication domain id, if not set, will take default, example: CGDM-EMEA
Availability zone id. Refer to AZ available in List Availability Zones method. If absent, default AZ of region will be used.
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
Indicates if backup off site (data replicated to another region) has to be setup on instance. If absent, backup off site will be setup automatically if instance is in a production service.
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
hostname of instance. Must be unique, and fit naming rules convention., example: PEB4MYAPP01
Network id. Refer to networks available in List Networks method. If absent, a default network of AZ will be used.
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
id of platform (image) of instance. To discover available platforms, use ListPlatforms method, example: deb8 for Debian 8
code of product., example: rmq11 for RabbitMQ 11
Custom properties to set up on instance such as security enforcement ... . Depends on which platform you choose to create (for some of them, properties are mandatory). Refer to platform specification to find out.
Region, that is a low-latency network area, available in the List Regions method. If absent, the default Area of the Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
identifier of resources (cpu/ram) that will be allocated to the instance. Use List Platforms method to see resources available for each of them., example: 1cpu2gb
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
specific request to be done by an administrator. Can defer delivery of instance up to 24h., example: Could you please install .NET framework 4.5 on instance ? Thanks.
Volumes to setup on instance. If absent, will be set to defaults.
This method allows you to delete an instance.
The instance has to be in INACTIVE
status, meaning that you have to stop it before deleting it. Use the Update Instance
PATCH method with the stop
operation prior to this deletion.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202
) and you'll have to wait for async action to be completed by checking its status.
Example (no body required) :
DELETE /instances/1233
With additional change reference :
DELETE /instances/1233
{
"changeReference": "RFC_123"
}
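Since the instance must be INACTIVE before deletion, the stop and delete calls have to be sequenced. A sketch producing the ordered requests; the tuple shape and helper are illustrative, not an ITCare client API:

```python
# Build the ordered HTTP calls needed to delete a running instance:
# first stop it (PATCH), then delete it, optionally with a change
# reference as documented above. Illustrative helper only.
def delete_sequence(instance_id, change_reference=None):
    steps = [("PATCH", "/instances/%d" % instance_id, {"operation": "stop"})]
    body = {"changeReference": change_reference} if change_reference else None
    steps.append(("DELETE", "/instances/%d" % instance_id, body))
    return steps

steps = delete_sequence(1233, change_reference="RFC_123")
```

In practice you would also wait for the stop action to complete between the two calls, as shown in the polling example earlier on this page.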
id, example: 123
Parameters when deleting a resource
Optional reference for change
This method allows you to update an instance.
The structure of the payload is generic and describes:
the operation you want to be performed
the options to pass to the operation
Below are the different operations currently implemented.
Stop an Instance
Use the stop
operation to perform the stop of instance.
This method is asynchronous (status code 202
) and you'll have to wait for async action to be completed by checking its status.
Use this method only if instance is running and is in the ACTIVE
state. Otherwise a 400
status error code will be returned.
PATCH /instances/1234
{
"operation": "stop"
}
You can also pass an optional changeReference
if you want ITCare to keep a reference to an external change management system:
PATCH /instances/1234
{
"operation": "stop",
"options": {
"changeReference": "RFC_123"
}
}
Start an Instance
Use the start
operation to perform the start of instance.
This method is asynchronous (status code 202
) and you'll have to wait for async action to be completed by checking its status.
Use this method only if instance is not running and is in the INACTIVE
state. Otherwise a 400
status error code will be returned.
PATCH /instances/1234
{
"operation": "start"
}
You can also pass an optional changeReference
if you want ITCare to keep a reference to an external change management system:
PATCH /instances/1234
{
"operation": "start",
"options": {
"changeReference": "RFC_123"
}
}
Reset an Instance
Use the reset
operation to perform the reset of instance.
Reset operation will perform a hard reset of instance, like power off/power on.
This operation may result in data loss, your applications and services will not be stopped gracefully.
This method is asynchronous (status code 202
) and you'll have to wait for async action to be completed by checking its status.
Use this method only if instance is running and is in the ACTIVE
state. Otherwise a 400
status error code will be returned.
PATCH /instances/1234
{
"operation": "reset"
}
You can also pass an optional comment
that will be displayed in the monitoring system:
PATCH /instances/1234
{
"operation": "reset",
"options": {
"comment": "Reset instance because OS is frozen"
}
}
Resize an Instance
Use the resize
operation to perform the resize of instance.
This method is asynchronous (status code 202
) and you'll have to wait for async action to be completed by checking its status.
Use this method only if the instance is in the INACTIVE or ACTIVE state. Otherwise a 400 status error code will be returned.
PATCH /instances/1234
{
"operation": "resize",
"options": {
"sizing": "2cpu4gb",
"changeReference": ""
}
}
Update Monitoring
Use the update_monitoring
operation to update the monitoring state of the instance.
Use the state
option to turn on/off monitoring.
Use the alerting
option to turn on/off alerting. When alerting is deactivated, no incident will be handled when the resource has alerts.
This method is asynchronous (status code 202
).
PATCH /instances/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"alerting": false
}
}
Update Patch Party Statuses
Use the operation update_patch_party
to manage the patch party settings of your instances.
2 options are available :
PATCH /instances/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this App. by myself"
}
}
}
There are 3 groups available defining the sequence on which the instance should be updated: 1 (First Group), 2 (Second Group) or 3 (Third Group).
If no group is set, it means that you have no preference while defining the sequences.
PATCH /instances/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": 3
}
}
}
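The patchParty options above follow simple rules: an excluded resource states an exclusionReason, while an included one may pick a patchGroup of 1, 2 or 3 (or none, meaning no preference). A sketch of that validation; the helper is illustrative, not part of the ITCare client:

```python
# Validate a patchParty options object per the rules documented above:
# - excluded resources should state an exclusionReason;
# - included resources may pick patchGroup 1, 2 or 3, or none at all.
def validate_patch_party(patch_party):
    if patch_party.get("excluded"):
        return bool(patch_party.get("exclusionReason"))
    group = patch_party.get("patchGroup")
    return group is None or str(group) in {"1", "2", "3"}
```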
Replication management
Use the operation update_storage_replication
to manage the replication settings of your instances.
2 options are available :
PATCH /instances/1234
{
"operation": "update_storage_replication",
"options": {
"state": false
}
}
PATCH /instances/1234
{
"operation": "update_storage_replication",
"options": {
"state": true
}
}
PATCH /instances/1234
{
"operation": "update_storage_replication",
"options": {
"state": false,
"deactivationReason": "I want it ..."
}
}
Instance Backup Management
Use the update_backup
operation to enable/disable instance backup.
Requirements to manage backup are :
2 options are available:
PATCH /instances/1234
{
"operation": "update_backup",
"options": {
"state": true
}
}
PATCH /instances/1234
{
"operation": "update_backup",
"options": {
"state": false,
"deactivationReason": "Because.."
}
}
id, example: 123
Resource Id, example: 123
Optional change reference
Snapshot description
Tags allow you to qualify your resources with a custom set of key-value pairs. Tags will be accessible using ITCare search.
123
Simple key/value object to put on resources (services, instances, loadbalancers) to be able to search across resources easily, and to benefit from dynamic resource groups.
Key of tag
Value of tag
No Content
Tags allow you to qualify your resources with a custom set of key-value pairs. Tags will be accessible using ITCare search.
123
Simple key/value object to put on resources (services, instances, loadbalancers) to be able to search across resources easily, and to benefit from dynamic resource groups.
Key of tag
Value of tag
Tags allow you to qualify your resources with a custom set of key-value pairs. Tags will be accessible using ITCare search.
123
Simple key/value object to put on resources (services, instances, loadbalancers) to be able to search across resources easily, and to benefit from dynamic resource groups.
Key of tag
Value of tag
Service Id, example: 44411
Update Patch party configuration for resources of a service
This method allows you to create a PostgreSQL instance.
You will have to know at the minimum:
the area (area attribute). Areas are available in the List Regions method.
the name (name attribute). The name can contain any lowercase characters or numbers (5-60). It must not be the keyword 'cluster'.
the node sizing (nodeSizing attribute). Ex: 2cpu2gb
the disk size (diskSize attribute). The possible values are at least 40 and at most 1024 (representing GB).
the admin password (admPassword attribute). The password must contain at least one lowercase, one uppercase, one digit and one special character; the minimum length is 12.
the PostgreSQL version (postgreVersion attribute). Example: 13
the service (serviceId attribute).
the network (networkId attribute).
the topology (topology attribute). Either standalone / HA.
the trigram (trigram attribute).
the allowed replication lag (allowedReplicationLag attribute). The minimum size is 1 MB and the maximum is 10240 MB.
HA topology extra fields. These fields are required for HA clusters:
the node prefix (nodePrefix attribute). The prefix should be from 5 to 12 characters and can contain any uppercase character.
the availability zone (az attribute).
the trigram (trigram attribute).
the tls flag (tls attribute).
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /postgresql
{
"serviceId" : 123,
"nodeSizing" : "2cpu4gb",
"networkId" : 132,
"area" : "EB-QA",
"diskSize" : 40,
"admPassword" : "Test123@2022",
"postgreVersion" : "13",
"allowedReplicationLag" : 10,
"az" : "az",
"topology" : "STANDALONE",
"trigram" : "tri",
"tls" : true,
"name" : "NEWPOSTGRE01"
}
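The creation constraints above (topology STANDALONE or HA, allowedReplicationLag between 1 and 10240 MB, HA clusters needing a 5-12 character uppercase nodePrefix) can be checked before posting. A sketch, assuming the nodePrefix is uppercase-only as the description suggests; the helper is illustrative:

```python
import re

# Sanity-check a PostgreSQL creation payload against the constraints
# documented above. Illustrative helper, not part of the ITCare client.
def check_postgresql_payload(payload):
    errors = []
    if payload.get("topology") not in ("STANDALONE", "HA"):
        errors.append("topology must be STANDALONE or HA")
    lag = payload.get("allowedReplicationLag", 0)
    if not 1 <= lag <= 10240:
        errors.append("allowedReplicationLag must be 1..10240 MB")
    if payload.get("topology") == "HA":
        prefix = payload.get("nodePrefix", "")
        if not re.fullmatch(r"[A-Z]{5,12}", prefix):
            errors.append("HA clusters need a 5-12 uppercase nodePrefix")
    return errors
```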
This method allows you to update a PostgreSQL instance.
The structure of the payload is generic and describes:
the operation you want to be performed
the options : data relative to the operation performed - see details - optional
Below are the different operations currently implemented.
Start PostgreSQL instance
Use the start
operation to start a PostgreSQL instance.
Starts PostgreSQL instance.
This method is asynchronous (status code 202
).
Example :
PATCH /postgresql/1234
{
"operation": "start"
}
Stop PostgreSQL instance
Use the stop
operation to stop the nodes of the PostgreSQL instance and the instance itself.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202
).
PATCH /postgresql/1234
{
"operation":"stop"
}
Resize PostgreSQL instance
Use the resize
operation to resize the nodes of the PostgreSQL instance.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202
).
PATCH /postgresql/1234
{
"operation":"resize",
"options": {
"sizing": "2cpu4gb"
}
}
Restore PostgreSQL instance
Use the restore
operation to restore a PostgreSQL instance to another PostgreSQL instance with the same farm version.
The available stop
options are BEFORE
and AFTER
.
This method is asynchronous (status code 202
).
PATCH /postgresql/5678
{
"operation": "restore",
"options": {
"sourceId": 1234,
"stop": "BEFORE",
"timestamp": "2022-11-02T09:32:02.000+00:00"
}
}
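The restore options above are easy to mis-type: stop must be BEFORE or AFTER, and timestamp must be an ISO-8601 instant like the one in the example. A client-side check can be sketched as follows (the helper is illustrative, not part of the ITCare client):

```python
from datetime import datetime

# Check the restore options documented above: `stop` must be BEFORE or
# AFTER, and `timestamp` must be an ISO-8601 instant. Sketch only.
def check_restore_options(options):
    if options.get("stop") not in ("BEFORE", "AFTER"):
        return False
    try:
        datetime.fromisoformat(options["timestamp"])
    except (KeyError, ValueError):
        return False
    return True
```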
Convert from Standalone to HA PostgreSQL instance
Use the enable_ha
operation to convert a PostgreSQL instance from Standalone to HA mode.
This method is asynchronous (status code 202
).
PATCH /postgresql/5678
{
"operation": "enable_ha",
"options": {
"replicationLag": 50,
"changeReference": "000"
}
}
Update Monitoring
Use the update_monitoring
operation to update the monitoring state of the cluster.
Use the state
option to turn on/off monitoring.
Use the on_call
option to turn on/off 24/7 monitoring.
This method is asynchronous (status code 202
).
PATCH /postgresql/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party
operation to update the patch party scheduled plan of the cluster.
Use the excluded option to turn on/off the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional and is only allowed when the farm has one member.
Use the exclusionReason option to explain the reason for excluding the resource from the patch party.
This method is synchronous (status code 202).
PATCH /postgresql/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /postgresql/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
Install PostgreSQL extension
Use the install_extension
operation to install an extension in the PostgreSQL instance.
This method is asynchronous (status code 202
).
PATCH /postgresql/1234
{
"operation":"install_extension",
"options": {
"dbname": "mydb",
"extensions_list": [
{"name": "ext1"},
{"name": "ext2"}
]
}
}
Upgrade PostgreSQL version
Use the upgrade
operation to upgrade the PostgreSQL version.
This method is asynchronous (status code 202
).
PATCH /postgresql/1234
{
"operation":"upgrade",
"options": {
"targetVersion": "12",
"changeReference": "RFC 1234"
}
}
id, example: 123
This method allows you to create a SQL Server 2022 platform.
You will have to know at the minimum :
The name (name attribute). The name can contain any lowercase characters or numbers (5-60). It must not be the keyword 'cluster'.
The volumes (volumes attribute). Initially, 5 disks are allocated and you can create one more. The first disk, disk0, represents "C: System"; its size ranges from 1 to 70 GB. disk1 represents "D: Root Instance", disk2 represents "E: User Databases", disk3 represents "F: User Log" and disk4 represents "G: TempDB". The size of each of these disks ranges from 10 to 4096 GB.
The customer password (customerPassword attribute). The password must contain at least one lowercase letter, one uppercase letter, one digit and one special character, with a minimum length of 8.
The service (serviceId attribute).
The network (networkId attribute).
The area (area attribute).
The collation (collation attribute).
The edition (edition attribute), either "STD" or "ENT".
This method is asynchronous (status code 202) and you will have to wait for the async action to complete by checking its status.
Optional fields:
The availability zone (az attribute). The default AZ of the area will be used if not provided.
The authentication domain (authenticationDomainId attribute).
Always On (alwaysOn attribute). Default is false.
SQL Server Integration Services (ssis attribute). Default is false.
SQL Server Reporting Services (ssrs attribute). Default is false.
SQL Server Analysis Services (ssas attribute). Default is false.
The Analysis Services server mode (asServerModeStd attribute), considered only if ssas is true.
The Analysis Services collation (asCollation attribute), considered only if ssas is true.
Full-Text search (fullText attribute). Default is false.
The availability mode (availabilityMode attribute).
The failover mode (failoverMode attribute).
The readable secondary (readableSecondary attribute).
The witness (witness attribute).
The listener name (listenerName attribute).
POST /sqlserver
{
"name":"RSQL22",
"nodeSizing":"2cpu8gb",
"volumes":[
{
"id":"disk3",
"sizeGb":10
},
{
"id":"disk4",
"sizeGb":10
},
{
"id":"disk2",
"sizeGb":30
},
{
"id":"disk1",
"sizeGb":30
},
{
"id":"disk0",
"sizeGb":70
}
],
"area":"EB-QA",
"customerPassword":"P@ssw0rd",
"collation":"French_BIN",
"edition":"STD",
"serviceId":2423,
"networkId":5000802
}
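Before POSTing, the volume sizes can be sanity-checked client-side against the limits stated above. This is only a convenience sketch; the API performs its own validation:

```python
# Size limits from the documentation: disk0 ("C: System") 1-70 GB,
# disk1-disk4 10-4096 GB.
LIMITS = {"disk0": (1, 70), "disk1": (10, 4096), "disk2": (10, 4096),
          "disk3": (10, 4096), "disk4": (10, 4096)}

def check_volumes(volumes):
    """Validate a 'volumes' list client-side before POST /sqlserver."""
    for vol in volumes:
        low, high = LIMITS[vol["id"]]
        if not low <= vol["sizeGb"] <= high:
            raise ValueError(f"{vol['id']}: sizeGb must be in [{low}, {high}]")

# The example payload above passes the check:
check_volumes([{"id": "disk0", "sizeGb": 70}, {"id": "disk2", "sizeGb": 30}])
```
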
Cluster/Basic Always On, example: true
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Collation for Analysis Services
Modelisation type
Modelisation type, example: TABULAR
authentication domain id, example: CGDM-EMEA
Cluster availability mode, example: Synchronous_commit
Availability zone of SQL Server
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
Database Collation, example: French_BIN
Customer Password
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
SQL Server edition, example: ENT
Cluster failover mode, example: Read-intent_only
Whether Full-Text search is enabled or not
Cluster listener name, example: rhusqllsnr01
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Name of SQL Server
The network Id of the SQL Server platform
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
cluster nodes number, example: 3
Node sizing for cluster
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Product platform of the cluster
Prefix name of SQL Server
Cluster readable secondary, example: YES, NO, READ_ONLY
Region: a low-latency network area, available in the List Regions method. If absent, the default Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
Specific request to be done by an administrator. Can delay delivery of the instance by up to 24h. Example: Could you please install .NET framework 4.5 on instance? Thanks.
SQL Server Analysis Services (SSAS)
SQL Server Integration Services (SSIS)
SQL Server Reporting Services (SSRS)
Volumes to setup on instance. If absent, will be set to defaults.
This method allows you to delete a SQL Server instance.
This method is asynchronous (status code 202) and you will have to wait for the async action to complete by checking its status.
DELETE /compute/databases/sqlserver/1234
DELETE /compute/databases/sqlserver/1234
{
"changeReference": "56789"
}
id, example: 123
Parameters when deleting a resource
Optional reference for change
This method allows you to update a SQL Server Farm.
The payload structure is generic and describes :
The operation you want to be performed.
The options: data relative to the operation performed (optional; see details for each operation).
Below are the different operations currently implemented.
Start SQL Server Farm
Use the start operation to start the SQL Server Farm.
This method is asynchronous (status code 202).
Example :
PATCH /compute/databases/sqlserver/1234
{
"operation": "start"
}
Stop SQL Server Farm
Use the stop operation to stop the SQL Server Farm.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /compute/databases/sqlserver/1234
{
"operation": "stop"
}
PATCH /compute/databases/sqlserver/1234
{
"operation": "stop",
"options": {
"changeReference": "56789"
}
}
Reset SQLServer Farm
Use the reset operation to reset the SQL Server Farm.
This method is asynchronous (status code 202).
Example :
PATCH /compute/databases/sqlserver/1234
{
"operation": "reset"
}
Resize SQL Server instance
Use the resize operation to resize the nodes of the SQL Server instance.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /compute/databases/sqlserver/1234
{
"operation":"resize",
"options": {
"sizing": "2cpu4gb"
}
}
Update Monitoring
Use the update_monitoring operation to update the monitoring state of the SQL Server.
Use the state option to turn monitoring on or off.
Use the on_call option to turn 24/7 monitoring on or off.
This method is asynchronous (status code 202).
PATCH /compute/databases/sqlserver/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party operation to update the scheduled patch party plan of the SQL Server.
Use the excluded option to include the resource in or exclude it from the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional and is only allowed when the farm has one member.
Use the exclusionReason option to explain why the resource is excluded from the patch party.
This method is asynchronous (status code 202).
PATCH /compute/databases/sqlserver/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /compute/databases/sqlserver/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
id, example: 123
Object describing a partial modification of an object to perform. Please refer to documentation to get list of operations available and their specific payload.
Operation to perform on target object, example: operation_name
Specific payload to pass to have the operation performed. Refer to documentation for each operation.
This method allows you to create a Redis instance.
You will have to know at the minimum :
The area (area attribute). Areas are available in the List Regions method.
The name (name attribute). The name can contain any lowercase characters or numbers (5-60). It must not be the keyword 'cluster'.
The node sizing (nodeSizing attribute). Ex: 2cpu2gb.
The disk size (diskSize attribute). The possible values range from 40 to 1024 (representing GB).
The admin password (admPassword attribute). The password must contain at least one lowercase letter, one uppercase letter, one digit and one special character, with a minimum length of 12.
The Redis version (redisVersion attribute). Example: 6.2.5.
The service (serviceId attribute).
The network (networkId attribute).
The instance count (instanceCount attribute). Minimum 1 and maximum 3.
The persistence mode (persistenceMode attribute).
Optional fields:
The availability zone (az attribute).
This method is asynchronous (status code 202) and you will have to wait for the async action to complete by checking its status.
POST /redis
{
"redisVersion": "6.2.5",
"region" : "EB",
"area": "EB-QA",
"az": "az",
"name": "Test123",
"nodeSizing": "2cpu2gb",
"diskSize": 40,
"networkId": 1234511,
"serviceId": 46922,
"admPassword": "Test123@2022",
"instanceCount": 1,
"persistenceMode" : "PERSISTENT"
}
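A few of the numeric constraints listed above can be checked client-side before sending the request. This is a convenience sketch only; the API remains the authority:

```python
def check_redis_request(body):
    """Minimal client-side checks mirroring the documented constraints
    for POST /redis (diskSize 40-1024 GB, instanceCount 1-3, name not
    the keyword 'cluster')."""
    if not 40 <= body["diskSize"] <= 1024:
        raise ValueError("diskSize must be between 40 and 1024 GB")
    if not 1 <= body["instanceCount"] <= 3:
        raise ValueError("instanceCount must be between 1 and 3")
    if body["name"] == "cluster":
        raise ValueError("name must not be the keyword 'cluster'")

# The example payload above passes the check:
check_redis_request({"name": "Test123", "diskSize": 40, "instanceCount": 1})
```
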
The admin password
^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Availability zone of the Redis instance
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
The storage needed on each data node of the Redis instance
Number of instances to create for Redis
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Name of Redis DB
The network Id of the Redis instance
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Node sizing for cluster
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Product platform of the cluster
Region: a low-latency network area, available in the List Regions method. If absent, the default Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
Prefix of the virtual machine names for Redis DB
This method allows you to update a Redis instance.
The payload structure is generic and describes :
The operation you want to be performed.
The options: data relative to the operation performed (optional; see details for each operation).
Below are the different operations currently implemented.
Start Redis instance
Use the start operation to start a Redis instance.
This method is asynchronous (status code 202).
Example :
PATCH /redis/1234
{
"operation": "start"
}
Stop Redis instance
Use the stop operation to stop the nodes of the Redis instance and the instance itself.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /redis/1234
{
"operation":"stop"
}
Resize Redis instance
Use the resize operation to resize the nodes of the Redis instance.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /redis/1234
{
"operation":"resize",
"options": {
"sizing": "2cpu4gb"
}
}
Update Monitoring
Use the update_monitoring operation to update the monitoring state of the cluster.
Use the state option to turn monitoring on or off.
Use the on_call option to turn 24/7 monitoring on or off.
This method is asynchronous (status code 202).
PATCH /redis/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party operation to update the scheduled patch party plan of the cluster.
Use the excluded option to include the resource in or exclude it from the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional and is only allowed when the farm has one member.
Use the exclusionReason option to explain why the resource is excluded from the patch party.
This method is asynchronous (status code 202).
PATCH /redis/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /redis/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
id, example: 123
This method allows you to create an Apache Kafka platform.
You will have to know at the minimum :
The area (area attribute). Areas are available in the List Regions method.
The name (name attribute). The name can contain any lowercase characters or numbers (5-60). It must not be the keyword 'cluster'.
The node prefix (nodePrefix attribute). The prefix should be from 5 to 12 characters and can contain any uppercase character.
The broker count (brokerCount attribute). The possible values range from 3 to 5.
The disk size (diskSize attribute). The possible values range from 40 to 1024 (representing GB).
The admin password (admPassword attribute). The password must contain at least one lowercase letter, one uppercase letter, one digit and one special character, with a minimum length of 12.
The Kafka version (kafkaVersion attribute). Example: 2.7.0.
The service (serviceId attribute).
The network (networkId attribute).
This method is asynchronous (status code 202) and you will have to wait for the async action to complete by checking its status.
POST /message-brokers/apache-kafka
{
"kafkaVersion": "2.7.0",
"area": "EB-QA",
"name": "testkafka",
"nodePrefix": "DM123",
"nodeSizing": "2cpu4gb",
"brokerCount": 3,
"diskSize": 40,
"networkId": 1234511,
"serviceId": 46922,
"admPassword": "Test123@2022"
}
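The admPassword must satisfy the pattern documented for this endpoint (at least one digit, one lowercase letter, one uppercase letter, one special character, 12 to 20 characters in total). It can be verified locally before sending the request, for example:

```python
import re

# Password pattern copied from the API schema (including the en dash in the
# character class): >=1 digit, lowercase, uppercase, special char, 12-20 chars.
PASSWORD_RE = re.compile(
    r"^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$"
)

assert PASSWORD_RE.match("Test123@2022")     # the example payload's password
assert not PASSWORD_RE.match("weakpassword") # no digit, uppercase or special char
```
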
The admin password
^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
Number of brokers to create in Kafka cluster
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
The storage needed on each data node of the Kafka cluster
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Name of Kafka cluster
(?!cluster$)([a-z0-9_]{5,60})$
The network Id of the Kafka cluster
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Prefix of the node names for Kafka cluster
[A-Z0-9]{5,12}$
Node sizing for cluster
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Product platform of the cluster
Region: a low-latency network area, available in the List Regions method. If absent, the default Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
This method allows you to update an Apache Kafka platform.
The payload structure is generic and describes :
The operation you want to be performed.
The options: data relative to the operation performed (optional; see details for each operation).
Below are the different operations currently implemented.
Start Apache Kafka instance
Use the start operation to start an Apache Kafka instance.
This method is asynchronous (status code 202).
Example :
PATCH /message-brokers/apache-kafka/1234
{
"operation": "start"
}
Stop Apache Kafka instance
Use the stop operation to stop the nodes of the Apache Kafka instance and the instance itself.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /message-brokers/apache-kafka/1234
{
"operation":"stop"
}
Resize Apache Kafka instance
Use the resize operation to resize the Apache Kafka instance.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /message-brokers/apache-kafka/1234
{
"operation":"resize",
"options" : {
"sizing" : "2cpu4gb"
}
}
Reconfigure Apache Kafka instance
Use the reconfigure operation to update the Apache Kafka instance parameters.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /message-brokers/apache-kafka/1234
{
"operation":"reconfigure",
"options" : {
"param" : "param"
}
}
Update Monitoring
Use the update_monitoring operation to update the monitoring state of the Apache Kafka instance.
Use the state option to turn monitoring on or off.
Use the on_call option to turn 24/7 monitoring on or off.
This method is asynchronous (status code 202).
PATCH /message-brokers/apache-kafka/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party operation to update the scheduled patch party plan of the Apache Kafka instance.
Use the excluded option to include the resource in or exclude it from the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional and is only allowed when the farm has one member.
Use the exclusionReason option to explain why the resource is excluded from the patch party.
This method is asynchronous (status code 202).
PATCH /message-brokers/apache-kafka/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /message-brokers/apache-kafka/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
id, example: 123
This method allows you to create a RabbitMQ instance.
You will have to know at the minimum :
The area (area attribute). Areas are available in the List Regions method.
The name (name attribute).
The VM prefix (vmPrefix attribute).
The broker count (brokerCount attribute). The possible values are 1, or 3 to 5.
The disk size (diskSize attribute). The possible values range from 10 to 2048 (representing GB).
The admin password (admPassword attribute). The password must contain at least one lowercase letter, one uppercase letter, one digit and one special character, with a minimum length of 12.
The RabbitMQ version (rabbitMqVersion attribute). Example: 3.8.9.
The service (serviceId attribute).
The network (networkId attribute).
Optional parameters that might be helpful:
The VM prefix (vmPrefix attribute).
The availability zone (az attribute).
This method is asynchronous (status code 202) and you will have to wait for the async action to complete by checking its status.
POST /message-brokers/rabbitmq
{
"rabbitMqVersion": "3.8.9",
"area": "EB-QA",
"name": "Test123",
"nodeSizing": "2cpu4gb",
"brokerCount": 3,
"diskSize": 40,
"networkId": 1234511,
"serviceId": 46922,
"admPassword": "Test123@2022",
"vmPrefix" : "AAAAAA"
}
The admin password
^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Availability zone of the RabbitMQ Broker
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
Number of brokers to create in RabbitMQ Broker
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
The storage needed on each data node of the RabbitMQ Broker
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Name of RabbitMQ Broker
The network Id of the RabbitMQ Broker
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Node sizing for cluster
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Product platform of the cluster
Region: a low-latency network area, available in the List Regions method. If absent, the default Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
Prefix of the virtual machine names for RabbitMQ Broker (Clusters)
This method allows you to update a RabbitMQ instance.
The payload structure is generic and describes :
The operation you want to be performed.
The options: data relative to the operation performed (optional; see details for each operation).
Below are the different operations currently implemented.
Start RabbitMQ instance
Use the start operation to start a RabbitMQ instance.
This method is asynchronous (status code 202).
Example :
PATCH /message-brokers/rabbitmq/1234
{
"operation": "start"
}
Stop RabbitMQ instance
Use the stop operation to stop the nodes of the RabbitMQ instance and the instance itself.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /message-brokers/rabbitmq/1234
{
"operation":"stop"
}
Resize RabbitMQ instance
Use the resize operation to resize the RabbitMQ instance.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /message-brokers/rabbitmq/1234
{
"operation":"resize",
"options" : {
"sizing" : "2cpu4gb"
}
}
Update Monitoring
Use the update_monitoring operation to update the monitoring state of the cluster.
Use the state option to turn monitoring on or off.
Use the on_call option to turn 24/7 monitoring on or off.
This method is asynchronous (status code 202).
PATCH /message-brokers/rabbitmq/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party operation to update the scheduled patch party plan of the broker.
Use the excluded option to include the resource in or exclude it from the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional and is only allowed when the farm has one member.
Use the exclusionReason option to explain why the resource is excluded from the patch party.
This method is asynchronous (status code 202).
PATCH /message-brokers/rabbitmq/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /message-brokers/rabbitmq/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
id, example: 123
This method allows you to create a LoadBalancer.
You will have to know at the minimum :
The area of the region where you want to host your cluster (area attribute). Areas are available in the List Regions method.
The URL (url attribute). The URL you want to create; it must respect the URL naming convention.
The network ID of the cluster (networkId attribute).
The service the LoadBalancer belongs to (serviceId attribute).
The domain the URL should belong to (domain attribute).
The healthcheck used to check that your URL is responding (healthcheck attribute).
The persistence configuration (persistence attribute).
The port members: the port on which the members of the loadbalancer should be listening (portMembers attribute). Example: 80.
The profile name (profileName attribute). Ex: HTTP, HTTPS, TCP.
The redirection rule (redirectToHttps attribute): whether to redirect to HTTPS or not.
The members of the loadbalancer (members attribute).
Optional fields:
The region (region attribute).
DNS setup (setUpDNSEnabled attribute). If true, the domain must support DNS creation. If the attribute is set to true and the domain does not support DNS setup, a 400 error will be raised.
The network (networkId attribute). If not set, the system will choose the default network available in the Availability Zone.
This method is asynchronous (status code 202) and you will have to wait for the async action to complete by checking its status.
POST /loadbalancers
{
"url": "url.cegedim.com",
"serviceId": 46922,
"area": "EB-QA",
"networkId": 4242,
"healthcheck":"CDGM",
"persistence": true,
"portMembers": 80,
"profileName": "HTTP",
"redirectToHttps":false,
"setUpDNSEnabled":false,
"members": [
{
"id": 42,
"network": {
"id": 42,
"ipAddress" : "1.2.3.4"
}
}
]
}
When the LoadBalancer supports SSL
POST /loadbalancers
{
"url": "url.cegedim.com",
"serviceId": 46922,
"area": "EB-QA",
"networkId": 4242,
"healthcheck":"CDGM",
"persistence": true,
"portMembers": 80,
"profileName": "HTTPS",
"redirectToHttps":true,
"setUpDNSEnabled":false,
"sslProfile":"my_ssl_profle",
"certificateName":"my_cert.crt",
"members": [
{
"id": 42,
"network": {
"id": 42,
"ip" : "1.2.3.4"
}
}
]
}
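Comparing the two examples, sslProfile and certificateName only appear with the HTTPS profile. A small builder can encode that; treating them as required for HTTPS is an assumption based on these examples, not a documented rule:

```python
def lb_request(url, service_id, area, network_id, members,
               profile_name="HTTP", ssl_profile=None, certificate_name=None,
               **extra):
    """Assemble a POST /loadbalancers body. sslProfile/certificateName are
    only attached for an HTTPS profile (assumption from the examples)."""
    body = {"url": url, "serviceId": service_id, "area": area,
            "networkId": network_id, "members": members,
            "profileName": profile_name, **extra}
    if profile_name == "HTTPS":
        if not (ssl_profile and certificate_name):
            raise ValueError("HTTPS profile requires sslProfile and certificateName")
        body["sslProfile"] = ssl_profile
        body["certificateName"] = certificate_name
    return body

https_body = lb_request("url.cegedim.com", 46922, "EB-QA", 4242, [],
                        profile_name="HTTPS", ssl_profile="my_ssl_profile",
                        certificate_name="my_cert.crt", redirectToHttps=True)
```
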
Describes a load balancer.
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
Certificate of the load balancer, example: wildcard_cegedim.com
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
Healthcheck of the load balancer, example: http
Members of pool to setup on load balancer.
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be set up. If absent, it will automatically be set up if this is a production environment, or if backup is enabled.
Network id. Refer to networks available in List Networks method. If absent, a default network of AZ will be used.
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Port member of the load balancer, example: 80, 443, ...
profile name of load balancer.
Region: a low-latency network area, available in the List Regions method. If absent, the default Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be set up. If absent, it will automatically be set up if this is a production environment.
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
Indicates if a DNS record is to be set. If absent, set to false.
ssl profile of the load balancer., example: profile_wildcard.cegedim.com_secure
URL of the load balancer. Must be unique and fit the naming rules convention, example: url.cegedim.com
^(https?:\\/\\/)?(www\\.)?[a-zA-Z][a-zA-Z0-9.-]{2,63}+$
port of load balancer in case of TCP VS Profile
This method allows you to update a load balancer.
The payload structure is generic and describes :
The operation you want to be performed.
The options: data relative to the operation performed (see details for each operation).
Below are the different operations currently implemented.
Start Load Balancer
Use the start operation to start the load balancer.
This method is asynchronous (status code 202).
Example :
PATCH /loadbalancers/1234
{
"operation": "start",
"options": {
"changeReference": "5678"
}
}
Stop Load Balancer
Use the stop operation to stop the load balancer.
This method is asynchronous (status code 202).
PATCH /loadbalancers/1234
{
"operation": "stop",
"options": {
"changeReference": "5678"
}
}
Create Bot Defense for Load Balancer
Use the activate_bot operation to apply a Bot Defense Security Profile to the load balancer.
Use the template option with values strict or standard to set the template to be applied. The default template value is standard.
Use the mode option with values transparent or blocking to set the mode to be applied. Mode is optional and the default mode is blocking.
This method is asynchronous (status code 202).
PATCH /loadbalancers/1234
{
"operation": "activate_bot",
"options": {
"changeReference": "5678",
"template": "strict",
"mode" : "blocking"
}
}
Update Bot Defense for Load Balancer
Use the update_bot operation to update the Security Profile of the load balancer.
Use the template option with values strict or standard to set the template to be applied. The default template value is standard.
Use the mode option with values transparent or blocking to set the mode to be applied. Mode is optional and the default mode is blocking.
This method is asynchronous (status code 202).
PATCH /loadbalancers/1234
{
"operation": "update_bot",
"options": {
"changeReference": "5678",
"template": "strict",
"mode" : "blocking"
}
}
When the Security Profile is applied, use the mode option with values transparent or blocking to set the mode to be applied. Mode is optional and the default mode is blocking.
In transparent mode, requests considered malicious generate an alarm but are not blocked.
In blocking mode, requests identified as malicious by Bot Defense are blocked.
PATCH /loadbalancers/1234
{
"operation": "update_bot",
"options": {
"changeReference": "5678",
"mode" : "transparent"
}
}
Delete Bot Defense Security Profile from Load Balancer
When the Security Profile is activated on a Load Balancer, the botDefenseEnabled attribute of the load balancer (e.g. on /loadbalancers/1234) is true.
To remove the Bot Defense Security Profile from a Load Balancer, use the delete_bot operation.
This method is asynchronous (status code 202).
PATCH /loadbalancers/1234
{
"operation": "delete_bot",
"options": {
"changeReference": "5678"
}
}
Update IP to whitelist for Load Balancer
Use the edit_bot_whitelist operation to update or add an IP to the whitelist of the load balancer.
This method is asynchronous (status code 202).
PATCH /loadbalancers/1234
{
"operation": "edit_bot_whitelist",
"options": {
"ip": "10.0.3.40",
"changeReference": "5678"
}
}
Remove IP Address from whitelist for Load Balancer
Use the delete_bot_whitelist operation to remove an IP from the whitelist of the load balancer.
This method is asynchronous (status code 202).
PATCH /loadbalancers/1234
{
"operation": "delete_bot_whitelist",
"options": {
"ip": "10.0.3.40",
"changeReference": "5678"
}
}
changeReference (optional) is the RFC number, if available.
Update Monitoring for Load Balancer and its URLs
Use the update_monitoring operation to update the monitoring status of the load balancer.
This method is asynchronous (status code 202).
PATCH /loadbalancers/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Example for a load balancer where url1 and url4 form its list of URLs:
PATCH /loadbalancers/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true,
"updateUrls": [
"url1",
"url4"
]
}
}
Load Balancer Id, example: 123
Object describing a partial modification of an object to perform. Please refer to documentation to get list of operations available and their specific payload.
Operation to perform on target object, example: operation_name
Specific payload to pass to have the operation performed. Refer to documentation for each operation.
Add a member to an existing loadbalancer.
The member must be a valid ITCare resource and must be in the same network as the other members of the loadbalancer.
Request example :
POST /compute/loadbalancers/my-service.cegedim.cloud/members
{
"resourceId": 5050706,
"port": 80,
"state": "enabled",
"name": "REBITCGDM1032",
"ip": "10.25.19.158"
}
The minimum payload must contain the following information :
Other fields will be ignored. The following payload is valid:
POST /compute/loadbalancers/my-service.cegedim.cloud/members
{
"resourceId": 5050706,
"port": 80
}
This method is synchronous (status code 200) and will return the load balancer's member list with the new member added :
[
{
"resourceId": 1050975,
"name": "PEB4APP01",
"port": 443,
"state": "enabled",
"status": "up",
"ip": "10.26.12.11"
},
{
"resourceId": 1050976,
"name": "PEB4APP02",
"port": 443,
"state": "enabled",
"status": "up",
"ip": "10.26.12.12"
},
{
"resourceId": 898734,
"name": "PEB4APP03",
"port": 443,
"state": "enabled",
"status": "up",
"ip": "10.26.12.13"
}
]
Note: the new member will be added with state enabled.
Note: member statistics are not included in the response body.
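Since the call returns the full member list, a client will typically scan it to verify pool health after adding a member. A minimal sketch (the helper name is mine, not an ITCare API), using the state and status fields shown in the response above:

```python
def healthy_members(members):
    """Return the names of members that are enabled and reported 'up',
    given the member list returned by the add-member call above."""
    return [m["name"] for m in members
            if m["state"] == "enabled" and m["status"] == "up"]

# Sample data shaped like the documented response.
members = [
    {"resourceId": 1050975, "name": "PEB4APP01", "port": 443,
     "state": "enabled", "status": "up", "ip": "10.26.12.11"},
    {"resourceId": 898734, "name": "PEB4APP03", "port": 443,
     "state": "disabled", "status": "down", "ip": "10.26.12.13"},
]
```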
IP address of the member.
Category of the member
Family of the member
Internal type of the member
Area on which the member is located
Name of the member on the loadbalancer
Port of the member, example: 80, 443, ...
Name of the member
Id of the resource. Required when an operation is performed.
serviceId to which this member belongs
Member state. (enabled, disabled, offline)
Status of the member. (up, down, user_down)
Technical Network on which the member is located
Technology of the member
Set the state of a loadbalancer member.
The member must be a valid ITCare resource and must be a member of the specified loadbalancer.
Possible state values are :
Example :
PATCH /compute/loadbalancers/123/members/1050975
{
"operation": "disabled"
}
This method is synchronous (status code 200) and will return the load balancer's member object :
{
"resourceId": 1050975,
"name": "PEB4APP01",
"port": 443,
"state": "disabled",
"status": "up",
"address": "10.26.12.11"
}
This method allows to create a URL for a load balancer.
name is the name of the URL.
setUpDNSEnabled indicates whether to set up a DNS record.
monitoringEnabled enables monitoring for the URL.
onCallSupervision enables 24/7 monitoring for the URL.
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /loadbalancers/124/urls
{
"name": "url.cegedim.com",
"setUpDNSEnabled": false,
"monitoringEnabled": true,
"onCallSupervision": true
}
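For asynchronous calls like this one, the client has to poll the async action until it completes. A generic polling sketch, with the status getter injected as a callable (in a real client it would GET the action-status endpoint; the terminal status names here are assumptions, not documented values):

```python
import time

def wait_for_action(get_status, terminal=("SUCCESS", "ERROR"),
                    interval=1.0, timeout=600):
    """Poll an async ITCare action until it reaches a terminal status.

    get_status is a callable returning the current status string.
    'SUCCESS'/'ERROR' are assumed names, not taken from the documentation.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in terminal:
            return status
        time.sleep(interval)
    raise TimeoutError("async action did not complete in time")
```

Injecting the getter keeps the retry/timeout logic testable without a live API.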
Describes a load balancer.
Indicates if monitoring will be setup.
URL of the load balancer. Must be unique and fit the naming convention, example: url.cegedim.com
^(https?:\\/\\/)?(www\\.)?[a-zA-Z][a-zA-Z0-9.-]{2,63}+$
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Indicates if a DNS record is to be set. If absent, set to false.
SSL profile of the load balancer, example: profile_wildcard.cegedim.com_secure
This method allows to update a URL of a load balancer.
Structure of payload is generic and describes :
operation you want to be performed
options - data relative to the operation performed - see details.
Below are different operations currently implemented.
Update Monitoring for Load Balancer and its URLs
Use the update_monitoring operation to update the monitoring status of the load balancer URL.
This method is synchronous (status code 202).
PATCH /loadbalancers/1234/urls/5678
{
"operation": "update_monitoring",
"options": {
"state": true,
"onCall": true
}
}
Load Balancer Id, example: 123
Load Balancer Url Id, example: 123
Object describing a partial modification of an object to perform. Please refer to documentation to get list of operations available and their specific payload.
Operation to perform on target object, example: operation_name
Specific payload to pass to have the operation performed. Refer to documentation for each operation.
This method allows to create a GlusterFS cluster.
You will have to know at the minimum :
The area (area attribute). Areas are available in the List Regions method.
The name (name attribute). The name can contain any lowercase characters, dashes and underscores.
The disk size (diskSize attribute). The possible values are at least 10 and at most 1024 (representing GB).
The administrator password (admPassword attribute). The password must be between 12 and 20 characters with at least one lowercase character, one uppercase character, one digit and one special character.
The user name (userName attribute). Maximum size is 32 characters; lowercase characters, underscores and dashes are allowed.
The service (serviceId attribute).
The network (networkId attribute).
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /storage/glusterfs
{
"name": "mygluster01",
"diskSize": "15",
"admPassword": "mySuperPassword123!!",
"userName": "dda",
"networkId": 123,
"area":"EB-A",
"serviceId": 46922
}
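Validating the request client-side before POSTing avoids an avoidable round trip. A sketch using the name and admPassword patterns documented in the field constraints below (the helper itself is hypothetical; the API remains the source of truth):

```python
import re

# Patterns copied from the documented field constraints.
PASSWORD_RE = re.compile(
    r"^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$")
NAME_RE = re.compile(r"[a-z0-9_\-]{5,60}$")

def validate_glusterfs_request(name, adm_password, disk_size):
    """Pre-check a GlusterFS creation payload against the documented
    constraints; returns a list of human-readable errors (empty if valid)."""
    errors = []
    if not NAME_RE.fullmatch(name):
        errors.append("name: 5-60 lowercase chars, digits, dashes, underscores")
    if not PASSWORD_RE.fullmatch(adm_password):
        errors.append("admPassword: 12-20 chars, lower, upper, digit, special")
    if not 10 <= int(disk_size) <= 1024:
        errors.append("diskSize: between 10 and 1024 GB")
    return errors
```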
The user password
^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–{}:;',?/*~$^+=<>]).{12,20}$
Area. Refer to an Area of a Region, that is a low-latency network area, available in List Regions method. If absent, default Area of Region will be used.
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
The volume configured during the setup of the GlusterFS cluster
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be setup. If absent, it will be automatically be setup if this is an production environment, or if backup is enabled.
Name of GlusterFs cluster
[a-z0-9_\-]{5,60}$
The network Id of the GlusterFS cluster
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Node sizing for cluster
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Product platform of the cluster
Region, that is a low-latency network area, available in the List Regions method. If absent, the default Area of the Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be setup. If absent, it will be automatically be setup if this is an production environment
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
[a-z0-9_\-]{1,32}$
This method allows to update a GlusterFS cluster.
Structure of payload is generic and describes :
operation you want to be performed
options - data relative to the operation performed - see details.
Below are different operations currently implemented.
Start
Use the start operation to start a GlusterFS cluster.
This method is synchronous (status code 202).
Example :
PATCH /storage/glusterfs/1234
{
"operation": "start",
"options": {
"changeReference": "RFC_123"
}
}
Stop
Use the stop operation to stop a GlusterFS cluster.
This operation cannot be undone afterwards.
This method is synchronous (status code 202).
PATCH /storage/glusterfs/1234
{
"operation": "stop",
"options": {
"changeReference": "RFC_123"
}
}
Resize GlusterFS instance
Use the resize operation to resize the nodes of the GlusterFS instance and the instance itself.
This operation cannot be undone afterwards.
This method is asynchronous (status code 202).
PATCH /storage/glusterfs/1234
{
"operation":"resize",
"options": {
"sizing": "2cpu4gb"
}
}
Add Volume
Use the add_volume operation to add a volume to a GlusterFS cluster.
This operation cannot be undone afterwards.
This method is synchronous (status code 202).
PATCH /storage/glusterfs/1234
{
"operation": "add_volume",
"options": {
"diskSize": "42",
"userName": "dda",
"userPass":"mySuperPassw0rd42"
}
}
Resize Volume
Use the resize_volume operation to resize a volume of a GlusterFS cluster.
This operation cannot be undone afterwards.
This method is synchronous (status code 202).
PATCH /storage/glusterfs/1234
{
"operation": "resize_volume",
"options": {
"name":"dda",
"diskSize": "42",
"userName": "dda"
}
}
Delete Volume
Use the delete_volume operation to delete a volume from a GlusterFS cluster.
This operation cannot be undone afterwards.
This method is synchronous (status code 202).
PATCH /storage/glusterfs/1234
{
"operation": "delete_volume",
"options": {
"name":"dda"
}
}
Update Monitoring
Use the update_monitoring operation to update the monitoring state of the GlusterFS cluster.
Use the state option to turn monitoring on or off.
Use the on_call option to turn 24/7 monitoring on or off.
This method is synchronous (status code 202).
PATCH /storage/glusterfs/1234
{
"operation": "update_monitoring",
"options": {
"state": true,
"on_call": true
}
}
Update Patch Party
Use the update_patch_party operation to update the patch party scheduled plan of the GlusterFS cluster.
Use the excluded option to include or exclude the resource from the patch party.
Use the patchGroup option to select the patching group; patchGroup is optional and is only allowed when the farm has one member.
Use the exclusionReason option to explain the reason for excluding the resource from the patch party.
This method is synchronous (status code 202).
PATCH /storage/glusterfs/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": false,
"patchGroup": "3"
}
}
}
PATCH /storage/glusterfs/1234
{
"operation": "update_patch_party",
"options": {
"patchParty": {
"excluded": true,
"exclusionReason": "I want to handle this by myself"
}
}
}
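The two update_patch_party examples above use mutually complementary options: patchGroup when the resource stays in the patch party, exclusionReason when it is excluded. A small payload builder (hypothetical helper, mirroring those two shapes) makes the intent explicit:

```python
def patch_party_payload(excluded, patch_group=None, exclusion_reason=None):
    """Build the update_patch_party PATCH body. Per the examples above,
    patch_group accompanies excluded=False and exclusion_reason
    accompanies excluded=True."""
    patch_party = {"excluded": excluded}
    if patch_group is not None:
        patch_party["patchGroup"] = patch_group
    if exclusion_reason is not None:
        patch_party["exclusionReason"] = exclusion_reason
    return {"operation": "update_patch_party",
            "options": {"patchParty": patch_party}}
```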
This method allows to create an OverDrive instance.
You will have to know at the minimum :
The region (region attribute).
The name (name attribute).
The sizing (sizing attribute). The possible values are : XS (up to 50 users total), S (up to 1M ppm), M (up to 10M ppm), L (up to 100M ppm), XL (more than 100M ppm).
The password (password attribute). The password must contain at least one lowercase character, one uppercase character, one digit and one special character, with a minimum length of 12.
The service (serviceId attribute).
The Drive site URI (driveSiteUri attribute).
This method is asynchronous (status code 202) and you'll have to wait for the async action to be completed by checking its status.
POST /storage/overdrive
{
"name": "poverdrive01",
"region": "EB",
"sizing":"XS",
"serviceId":"123",
"password": "Password!!??",
"driveSiteUri": "demo.mydrive.cegedim.cloud"
}
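Choosing the sizing attribute follows directly from the tier descriptions above. The mapping below is my reading of those tiers (XS is defined by user count, the others by pages per minute), sketched as a hypothetical helper rather than an official rule:

```python
def choose_overdrive_sizing(peak_ppm=None, users=None):
    """Pick an OverDrive sizing from the documented tiers.

    XS covers up to 50 users total; S/M/L/XL are read as ppm thresholds
    (1M, 10M, 100M). Interpretation of 'ppm' is an assumption here.
    """
    if users is not None and users <= 50 and not peak_ppm:
        return "XS"
    if peak_ppm is None:
        raise ValueError("peak_ppm is required for sizes above XS")
    if peak_ppm <= 1_000_000:
        return "S"
    if peak_ppm <= 10_000_000:
        return "M"
    if peak_ppm <= 100_000_000:
        return "L"
    return "XL"
```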
Indicates if backup has to be setup on instance. If absent, backup will be setup automatically if instance is in a production service.
BackupPolicy id. Refers to desired backup policy to be applied for the database, must be set when backup is enabled.
Define the Drive URL to be configured in OverDrive
Indicates if alerting should be activated. If absent, set to false.
Indicates if monitoring will be setup. If absent, it will be automatically be setup if this is an production environment, or if backup is enabled.
Name of OverDrive Instance
[a-z0-9\-]{4,60}$
Indicates why a production resource is not under backup.
Indicates why a production resource is not under monitoring.
Indicates why a production resource is not replicated.
Indicates if on call teams will be called on non business hours if an incident occurs on instance. If absent, set to false.
Password to connect to the OverDrive instance.
Region, that is a low-latency network area, available in the List Regions method. If absent, the default Area of the Region will be used.
Regulation. Refer to the regulation of the Area (HDS|STANDARD). If absent, default 'STANDARD' will be used.
Indicates if replication will be setup. If absent, it will be automatically be setup if this is an production environment
BackupPolicy id. Refers to desired backup policy to be applied for the virtual machine, must be set when backup is enabled.
id of service to put instance in.
Sizing for OverDrive instances : XS, S, M, L, XL
This method allows to update an OverDrive instance.
Structure of payload is generic and describes :
operation you want to be performed
options - data relative to the operation performed - see details - optional.
Below are different operations currently implemented.
Start Overdrive instance
Use the start operation to start an OverDrive instance.
This method is synchronous (status code 202).
Example :
PATCH /storage/overdrive/1234
{
"operation": "start"
}
Stop Overdrive instance
Use the stop operation to stop the nodes of the OverDrive instance and the instance itself.
This operation cannot be undone afterwards.
This method is synchronous (status code 202).
PATCH /storage/overdrive/1234
{
"operation":"stop"
}
id, example: 123