Object Storage - Features

Structure

Object Store

An Object Store is a logically delimited container for Buckets and Objects stored in cegedim.cloud Object Storage Service.

It comes with a dedicated Object User, which is the only one authorized to view and manage objects within this Object Store. You can create other Object Users.

When creating an Object Store, you have to choose:

  • A simple name

  • A data center

    • EB4 --> data will only be located in EB4 - Boulogne data center

    • ET1 --> data will only be located in ET1 - Toulouse data center

    • EB4-ET1 --> data is replicated over EB4 and ET1 and is accessible from both data centers

You are not limited and can create as many Object Stores as you need.

Nevertheless, it can be better to use separate Buckets for objects within the same application, or for different applications.

We recommend to use Object Store at the Project or "Group of projects" level, and Bucket at the "File typology" level.

For more information about Object Store creation, read Object Storage - Get started.

Buckets

A Bucket is a logically delimited container for objects. Each object in the cegedim.cloud Object Storage Service is located in a Bucket.

A Bucket can be created using a S3 client, and has some attributes you can use to control behavior of the Bucket and its objects, for example:

  • VersioningPolicy which allows you to configure how many versions of a file have to be kept by cegedim.cloud Object Storage Service

  • BucketPolicy which allows you to configure permissions and restrictions for objects in the bucket
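As an illustration, here is a minimal sketch of creating a bucket and enabling S3 versioning with the aws s3api tooling used later in this guide (the bucket name and the ${S3_ENDPOINT}/${S3_PROFILE} variables are placeholders):

aws s3api --endpoint-url=${S3_ENDPOINT} create-bucket --bucket my-bucket --profile ${S3_PROFILE}

# Enable versioning so previous versions of overwritten objects are kept
aws s3api --endpoint-url=${S3_ENDPOINT} put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled --profile ${S3_PROFILE}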

Objects

An object is what we would call a file on a classic file system. Each object belongs to a Bucket and has a key as its unique identifier.

Note that folders do not exist in cegedim.cloud Object Storage Service, but you can use prefixes and delimiters to organize the data that you store in Buckets.

A prefix is a string of characters at the beginning of the object key name. A delimiter is a character, usually the slash '/', used to separate each level of objects and simulate a file-system-like structure.

For example, if you store information about customers, organized by year and month:

customer1/2020/03
customer1/2020/04
customer1/2021/05
customer2/2020/03
customer2/2021/02

In this example, '/' is the delimiter, and 'customer1/2020/' can be a prefix.
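A sketch of how a client can list only one customer's 2020 data using that prefix and delimiter (the bucket name and variables are illustrative):

aws s3api --endpoint-url=${S3_ENDPOINT} list-objects-v2 --bucket my-bucket --prefix customer1/2020/ --delimiter / --profile ${S3_PROFILE}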

Diagram

S3 API Compatibility

Check the S3 API compatibility page for the list of supported and unsupported S3 APIs, and for the specific behaviors of the cegedim.cloud object storage solution.

Endpoints

cegedim.cloud object storage solution provides two access endpoints:

    • https://storage-eb4.cegedim.cloud allows you to use the Object Storage Service from the EB4 - Boulogne data center.

    • https://storage-et1.cegedim.cloud allows you to use the Object Storage Service from the ET1 - Toulouse data center.
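For example, to point an S3 client at one of these endpoints (the profile name is illustrative):

aws s3 ls --endpoint-url https://storage-eb4.cegedim.cloud --profile my-object-user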

Geo-Replicated

For Object Stores geo-replicated between EB4 and ET1, both endpoints allow you to access your objects.

If you upload an object using the EB4 endpoint, EB4 will become the 'owner' of the object, and vice versa for ET1.

Authentication

Object User

Access to Buckets is done using an Object User.

When an Object Store is created, an Object User known as the "Initial S3 user" is automatically created. Each Object User has an access_key and a secret_key. Both are randomly generated by cegedim.cloud Object Storage Service.

You can have more than one Object User per Object Store. An Object User is linked to only one Object Store and can't be used to perform operations on another Object Store.
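In practice, an Object User's keys are typically registered as a named profile in your S3 tooling; a sketch with AWSCLIv2 (the profile name and placeholders are illustrative):

aws configure set aws_access_key_id <access_key> --profile my-object-user
aws configure set aws_secret_access_key <secret_key> --profile my-object-user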

For more information about Object Users, refer to Manage Object Users.

Secret Key Renewal

At any time, you can re-generate the secret key of an Object User, for security reasons or when the Object User's credentials are compromised.

When changing the secret key, you can add a "grace period", during which, both old and new secret keys are valid and accepted by cegedim.cloud Object Storage Service.
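Once the new secret key has been generated, update the credentials used by your S3 clients before the grace period ends; for example with AWSCLIv2 (the profile name is illustrative):

aws configure set aws_secret_access_key <new_secret_key> --profile my-object-user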

Authorizations

Authorizations are managed at the Bucket level, using Bucket Policies.

Bucket Policies allow fine-grained management of the permissions applied to objects and Object Users, optionally based on conditional statements such as the Object User's access_key or the source IP address.

When creating a Bucket, there is no Bucket Policy by default and the bucket is not public.

That means only the Object User who created the bucket can access it.

For more information about Bucket Policies, refer to Bucket Policies.

Secured Transport

cegedim.cloud Object Storage Service is only available over HTTPS on port 443.
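For example, the endpoint can be probed over HTTPS with any HTTP client:

curl -sI https://storage-eb4.cegedim.cloud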

Log Management

S3 Bucket logging is not supported by cegedim.cloud Object Storage Service.

Any request or operation on the cegedim.cloud Object Storage Service is logged internally by cegedim.cloud.

Logs include operations on Object Stores and Object Users, as well as operations done at the bucket and object level (GET, PUT, DELETE, ...).

If you need an extraction of the logs for your Object Storage resources, please contact the cegedim.cloud support teams.

Features

Presigned URL

cegedim.cloud Object Storage Service supports sharing objects with others by creating presigned URLs.

When you create a presigned URL, you must provide:

  • Your security credentials

  • A bucket name and an object key

  • An HTTP method (for example, GET for downloading or PUT for uploading objects)

  • An expiration time

The presigned URLs are valid only for the specified duration.

For more information about Presigned URLs, refer to Presigned URL.

Bucket Policy Support

cegedim.cloud Object Storage Service supports the setting of S3 bucket policies.

Bucket policies provide specific users, or all users, conditional and granular permissions for specific actions.

Policy conditions can be used to assign permissions for a range of objects that match the condition and can be used to automatically assign permissions to newly uploaded objects.

Bucket policy example:

    {
        "Version": "2012-10-17",
        "Id": "policyExample",
        "Statement":[
            {
                "Sid":"Granting PutObject permission to user24",
                "Effect":"Allow",
                "Principal": "user24",
                "Action":["s3:PutObject"],
                "Resource":["mybucket/*"],
                "Condition": {
                    "StringEquals": {"s3:x-amz-server-side-encryption": [ "AES256"]}
                }
            }
        ]
    }

For more information about Bucket Policies, refer to Bucket Policies.

Object lifecycle management

cegedim.cloud Object Storage Service supports S3 Lifecycle Configuration on both version-enabled and non-version-enabled buckets.

An S3 Lifecycle Configuration is a set of rules that define actions applied to a group of objects. Only Expiration actions are supported.

You can define an S3 Lifecycle Configuration to automatically delete objects.

Lifecycle configuration example:

    {
        "Rules": [
            {
                "Expiration": {
                    "Days": 30
                },
                "ID": "lifecycle-expire-non-current-and-mpu",
                "Prefix": "",
                "Status": "Enabled",
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 1
                },
                "AbortIncompleteMultipartUpload": {
                    "DaysAfterInitiation": 1
                }
            }
        ]
    }

For more information about Lifecycle Configuration, refer to Bucket Lifecycle.

S3 Object Lock

cegedim.cloud Object Storage Service supports Object Lock configuration.

Object Lock prevents object version deletion during a user-defined retention period. Immutable S3 objects are protected using object- or bucket-level configuration of WORM and retention attributes.

The retention policy is defined using the S3 API or bucket-level defaults.

Objects are locked for the duration of the retention period, and legal hold scenarios are also supported.

There are two lock types for Object lock:

  • Retention period: Specifies a fixed period of time during which an object version remains locked. During this period, your object version is WORM-protected and can't be overwritten or deleted.

  • Legal hold: Provides the same protection as a retention period, but it has no expiration date. Instead, a legal hold remains in place until you explicitly remove it. Legal holds are independent from retention periods.

There are two modes for the retention period:

Governance mode

Users can't overwrite or delete an object version or alter its lock settings unless they have special permissions.

With Governance mode, you protect objects against being deleted by most users, but you can still grant some users permission to alter the retention settings or delete the object if necessary.

You can also use Governance mode to test retention-period settings before creating a compliance-mode retention period.

  • Users cannot overwrite or delete an object version.

  • Users with s3:PutObjectRetention permission can increase an object retention period.

  • Users with the special s3:BypassGovernanceRetention permission can remove or shorten an object retention.

  • Users with s3:BypassGovernanceRetention permission can also delete locked objects.

Compliance mode

A protected object version can't be overwritten or deleted by any user, including the root user in your account.

When an object is locked in compliance mode, its retention mode can't be changed, and its retention period can't be shortened.

Compliance mode helps ensure that an object version can't be overwritten or deleted for the duration of the retention period.

  • Users cannot overwrite or delete an object version.

  • Users with s3:PutObjectRetention permission can increase an object retention period.

  • Users cannot remove or shorten an object retention.

In Compliance mode, if you applied a wrong retention period (e.g. 6 years instead of 6 days), cegedim.cloud has no way to delete or shorten the retention period.

A good practice is to start with Governance mode to perform tests, and then switch to Compliance mode.

For more information about Object Lock, refer to Object Lock.


    Presigned URL

    cegedim.cloud Object Storage Service supports presigned URLs to grant access to objects without needing credentials.

    Presigned URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an access key, expiration time, and SigV4 signature as query parameters to the S3 object URL.

    Presigned URLs also allow you to grant someone the right to upload a specific object to your Bucket.

    There are a few common use cases where you may want to use them:

    • Simple, occasional sharing of private files

    • Frequent, programmatic access to view an object in an application

    • Frequent, programmatic access to upload an object through an application

    Generating a Presigned URL (download)

    We use aws s3 and aws s3api command line tools from AWSCLIv2 on Linux.

    ${S3_ENDPOINT} and ${S3_PROFILE} are environment variables.

    aws s3 --endpoint-url=${S3_ENDPOINT} presign s3://bucket-test/feather.ttf --expires-in 600 --profile ${S3_PROFILE}

    Output
    https://storage-eb4.cegedim.cloud/bucket-test/feather.ttf?AWSAccessKeyId=fzs37xbv5615hygx2wkm&Signature=S4jFPas53s8cnwdDieMHrhc0ddE%3D&Expires=1666821099

    In this example, the generated URL has an expiration of 10 minutes (600 seconds). After this time, the object will no longer be accessible.

    --expires-in (integer) Number of seconds until the presigned URL expires. Default value is 3600 seconds.

    The maximum expiration time is 7 Days.
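    The presigned GET URL can then be fetched with any HTTP client; for illustration, with curl and the URL generated above:

    curl -o feather.ttf 'https://storage-eb4.cegedim.cloud/bucket-test/feather.ttf?AWSAccessKeyId=fzs37xbv5615hygx2wkm&Signature=S4jFPas53s8cnwdDieMHrhc0ddE%3D&Expires=1666821099'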

    Generating a Presigned URL (upload)

    If an object with the same key already exists in the bucket specified in the presigned URL, the existing object will be overwritten.

    aws s3 and aws s3api don't support generating upload presigned URLs.

    You need to use an AWS SDK to create a presigned URL for upload.

    Below is a simple example using the AWS SDK for Python (Boto3): https://boto3.amazonaws.com/v1/documentation/api/latest/index.html

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    import boto3
    from botocore.client import Config

    s3 = boto3.client(
        's3',
        aws_access_key_id='xxxxx',
        aws_secret_access_key='xxxxx',
        config=Config(s3={'addressing_style': 'path'}),
        endpoint_url='https://storage-eb4.cegedim.cloud'
    )
    bucket = "bucket-test"
    key = "feather.ttf"

    print(s3.generate_presigned_url('put_object', Params={'Bucket': bucket, 'Key': key}, ExpiresIn=300, HttpMethod='PUT'))

    # Run the Python script
    ./create_presign_url_upload.py

    # Output
    https://storage-eb4.cegedim.cloud/bucket-test/feather.ttf?AWSAccessKeyId=fzs37xbv5615hygx2wkm&Signature=NI%2BvoHYhWEFPDR04ioeFfBz5fks%3D&Expires=1712056959

    Upload presigned URLs work only with path-style addressing.

    Replace aws_access_key_id and aws_secret_access_key with your own credentials.

    ExpiresIn (integer): Number of seconds until the presigned URL expires. Default value is 3600 seconds. The maximum expiration time is 7 Days.

    You can use a tool like curl to upload your object to your bucket, using the URL generated previously:

    curl --request PUT --upload-file feather.ttf 'https://storage-eb4.cegedim.cloud/bucket-test/feather.ttf?AWSAccessKeyId=fzs37xbv5615hygx2wkm&Signature=NI%2BvoHYhWEFPDR04ioeFfBz5fks%3D&Expires=1712056959'

    Limitation and Best Practices

    Object Stores

    Limitations

    The following rules apply to the naming of Object Stores in cegedim.cloud Object Storage Service:

  • Must be between one and 255 characters in length

  • Can include hyphen (-) and alphanumeric characters ([a-zA-Z0-9])

  • Avoid the use of underscores (_)

  • Avoid the use of UPPERCASE letters

  • Cannot start with a dot (.)

  • Cannot contain a double dot (..)

  • Cannot end with a dot (.)

  • Cannot contain spaces

  • Must not be formatted as IPv4 address

  • Object Store names must be unique in cegedim.cloud Object Storage Service

    Best Practices

    • Create Object Store per Business Unit or per application

    • Geo Replication can't be enabled or disabled once the Object Store is created

    • For best performance, it is recommended to have fewer than 1000 buckets in a single Object Store

    • Object Store names should be DNS compatible

    Buckets

    Limitations

    The following rules apply to the naming of S3 buckets in cegedim.cloud Object Storage Service:

    • Must be between 3 and 255 characters in length.

    • Can include dot (.), hyphen (-), and underscore (_) characters and alphanumeric characters ([a-zA-Z0-9])

    • Avoid the use of UPPERCASE letters

    • Can start with a hyphen (-) or alphanumeric character

    • Cannot start with a dot (.)

    • Cannot contain a double dot (..)

    • Cannot end with a dot (.)

    • Cannot contain spaces

    • Must not be formatted as IPv4 address

    • Bucket names must be unique within an Object Store

    Best Practices

    • Use buckets for a specific environment, workflow, or use. For instance: dev, test, finance, operations, etc.

    • In an Object Store with the Geo Replication enabled, create buckets using the closest (EB4 or ET1) endpoint to the application accessing and updating the objects

    There is overhead involved in checking for the latest copy when the ownership of an object is at a remote endpoint

    • For best performance, it is recommended to have fewer than 1000 buckets in a single Object Store

    • Bucket names should be DNS compatible

    Objects

    Limitations

    The following rules apply to the naming of Objects in cegedim.cloud Object Storage Service:

    • Cannot be null or an empty string

    • Length must be between 1 and 255 Unicode characters

    • Avoid using spaces

    • No validation on characters

    Best Practices

    • Object names should be DNS compatible

    Small Objects vs Large Objects

    This section provides useful tips when handling small and large objects within your application. It also provides some information on cegedim.cloud Object Storage Service versioning and compression details and options.

    Small Objects

    A small object is considered to be an object that is less than 100 KB.

    cegedim.cloud Object Storage Service has a special internal mechanism which helps performance for data writes of small objects. It aggregates multiple small data objects queued in memory and then writes them in a single disk operation, up to 2 MB of data. This improves performance by reducing the number of round-trips to process individual writes to storage.

    Although cegedim.cloud Object Storage Service has optimizations for small writes, if there is an option in your application to define a write size, choose a larger size (e.g. 1 MB rather than 64 KB) or a value that aligns with the cegedim.cloud Object Storage Service internal buffer size of 2 MB for better performance.

    Large Objects

    One of the issues with reading and writing large objects is performance.

    cegedim.cloud Object Storage Service provides certain API features, such as multipart upload, to reduce the impact on performance for large objects. Some tips to alleviate issues with large object access:

    When working with large objects (> 100 MB), use the multipart upload feature. This allows uploads of large objects to be paused and resumed.

    The cegedim.cloud Object Storage Service internal buffer size is 2 MB. For objects smaller than 1 GB, use a part size that is a multiple of 2 MB (e.g. 8 MB).

    The cegedim.cloud Object Storage Service chunk size is 128 MB. For objects larger than 1 GB, use a 128 MB part size.

    Performance throughput can be improved by parallelizing uploads within your application (see the CLI sketch after the list below).

    Use APIs that allow for easy upload and download, for instance:

    • In Java, use the TransferManager

    • In .NET, use TransferUtility
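    As an illustration, assuming AWSCLIv2 as in the other examples in this guide, the CLI's automatic multipart behaviour can be tuned to match these recommendations (the values shown simply reflect the tips above, they are not mandatory settings):

    # Switch to multipart uploads for objects larger than 100 MB
    aws configure set default.s3.multipart_threshold 100MB
    # Use a part size that is a multiple of the 2 MB internal buffer
    aws configure set default.s3.multipart_chunksize 8MB
    # Parallelize part uploads
    aws configure set default.s3.max_concurrent_requests 10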


    Bucket Lifecycle

    The lifecycle configuration allows you to set an expiration policy on your objects and auto-delete them.

    For example, you may need some objects to be deleted automatically.

    In this example, we will create a policy to automatically delete objects with a key starting with reports/ after 90 days. We could use a GUI client or an AWS SDK, but we will use the AWS CLI to do so.

    Limitations

    • Lifecycle is a bucket level concept.

    • A maximum of 1000 lifecycle rules per bucket applies.

    • There may be a delay between the expiration date and the date at which Object Storage Service removes an object.

    • The resulting expiration time is always rounded up to midnight UTC of the next day.

    Deleted objects cannot be restored.

    Manage lifecycle policy

    Bucket lifecycle configuration can be managed using aws s3api (other tools or SDKs work too):

    • put-bucket-lifecycle

    • get-bucket-lifecycle

    • delete-bucket-lifecycle

    We use the aws s3 and aws s3api command line tools from AWSCLIv2 on Linux.

    ${S3_ENDPOINT} and ${S3_PROFILE} are environment variables.

    Create a lifecycle policy

    Create a JSON file and put your policy in it:

    delete_after_3_days.json
    {
      "Rules": [
        {
          "Filter": {
            "Prefix": ""
          },
          "Expiration": {
            "Days": 3
          },
          "Status": "Enabled",
          "ID": "Delete After 3 days."
        }
      ]
    }

    Apply it to the bucket bucket-test:

    aws s3api --endpoint-url=${S3_ENDPOINT} put-bucket-lifecycle --bucket bucket-test --lifecycle-configuration file://delete_after_3_days.json --profile ${S3_PROFILE}

    Get a lifecycle configuration

    aws s3api --endpoint-url=${S3_ENDPOINT} get-bucket-lifecycle --bucket bucket-test --profile ${S3_PROFILE}

    Delete a lifecycle configuration

    aws s3api --endpoint-url=${S3_ENDPOINT} delete-bucket-lifecycle --bucket bucket-test --profile ${S3_PROFILE}

    Supported lifecycle configuration elements

    Name
    Description
    Required

    Filter

    • Container for elements that describe the filter identifying a subset of objects to which the lifecycle rule applies. If you specify an empty filter ("Prefix": {}), the rule applies to all objects in the bucket.

    • Type: String

    • Children: Prefix, Tag

    • Ancestor: Rule

    Yes

    ID

    • Unique identifier for the rule. The value cannot be longer than 255 characters.

    • Type: String

    • Ancestor: Rule

    No

    Key

    • Specifies the key of a tag. A tag key can be up to 128 Unicode characters in length.

    • Tag keys that you specify in a lifecycle rule filter must be unique.

    • Type: String

    • Ancestor: Tag

    Yes, if <Tag> parent is specified.

    LifecycleConfiguration

    • Container for lifecycle rules. You can add as many as 1,000 rules.

    • Type: Container

    • Children: Rule

    • Ancestor: None

    Yes

    ExpiredObjectDeleteMarker

    • On a versioned bucket (versioning-enabled or versioning-suspended bucket), you can add this element in the lifecycle configuration to direct S3 to delete expired object delete markers. On a non-versioned bucket, adding this element in a policy is meaningless because you cannot have delete markers and the element does not do anything.

    • When you specify this lifecycle action, the rule cannot specify a tag-based filter.

    • Type: String

    • Valid values: true | false (the value false is allowed, but it is a no-op and S3 does not take action if the value is false)

    • Ancestor: Expiration

    Yes, if Date and Days are absent.

    NoncurrentDays

    • Specifies the number of days an object is non-current before S3 can perform the associated action.

    • Type: Positive Integer when used with NoncurrentVersionExpiration.

    • Ancestor: NoncurrentVersionExpiration

    Yes

    NoncurrentVersionExpiration

    • Specifies when non-current object versions expire. Upon expiration, S3 permanently deletes the non-current object versions.

    • You set this lifecycle configuration action on a bucket that has versioning enabled (or suspended) to request that S3 delete non-current object versions at a specific period in the object's lifetime.

    • Type: Container

    • Children: NoncurrentDays

    • Ancestor: Rule

    Yes, if no other action is present in the Rule.

    Prefix

    • Object key prefix identifying one or more objects to which the rule applies. Empty prefix (<Prefix></Prefix>) indicates there is no filter based on key prefix.

    • There can be at most one Prefix in a lifecycle rule Filter.

    • Type: String

    • Ancestor: Filter or And (if you specify multiple filters such as a prefix and one or more tags)

    • Note: <Prefix> is supported both with and without <Filter>; PUT Bucket lifecycle without <Filter> is deprecated.

    • Examples:

      "Prefix": ""              # no prefix
      "Prefix": "documents/"
      "Filter": { "Prefix": "" }
      "Filter": { "Prefix": "documents/" }

    No

    Rule

    • Container for a lifecycle rule. A lifecycle configuration can contain as many as 1,000 rules.

    • Type: Container

    • Ancestor: LifecycleConfiguration

    Yes

    Status

    • If Enabled, S3 executes the rule as scheduled. If Disabled, S3 ignores the rule.

    • Type: String

    • Ancestor: Rule

    • Valid values: Enabled, Disabled.

    Yes

    Value

    • Specifies the value for a tag key. Each object tag is a key-value pair.

    • Tag value can be up to 256 Unicode characters in length.

    • Type: String

    • Ancestor: Tag

    Yes, if <Tag> parent is specified.

    And

    • Container for specifying rule filters. These filters determine the subset of objects to which the rule applies.

    • Type: String

    • Ancestor: Rule

    Yes, if you specify more than one filter condition (for example, one prefix and one or more tags).

    Date

    • Date when you want S3 to take the action.

    • The date value must conform to the ISO 8601 format. The time is always midnight UTC.

    • Type: String

    • Ancestor: Expiration

    Yes, if Days and ExpiredObjectDeleteMarker are absent.

    Days

    • Specifies the number of days after object creation when the specific rule action takes effect.

    • Type: Nonnegative Integer when used with Transition, Positive Integer when used with Expiration.

    • Ancestor: Expiration

    Yes, if Date and ExpiredObjectDeleteMarker are absent.

    Expiration

    • This action specifies a period in an object's lifetime when S3 should take the appropriate expiration action. The action S3 takes depends on whether the bucket is versioning-enabled.

    • If versioning has never been enabled on the bucket, S3 deletes the only copy of the object permanently. Otherwise, if your bucket is versioning-enabled (or versioning is suspended), the action applies only to the current version of the object. A versioning-enabled bucket can have many versions of the same object, one current version, and zero or more non-current versions.

    • Instead of deleting the current version, S3 makes it a non-current version by adding a delete marker as the new current version.

    Note:


    • If your bucket state is versioning-suspended, S3 creates a delete marker with version ID null. If you have a version with version ID null, then S3 overwrites that version.

    • To set expiration for non-current objects, you must use the NoncurrentVersionExpiration action.

    • Type: Container

    • Children: Days or Date

    • Ancestor: Rule

    Yes, if no other action is present in the Rule.


    Bucket Policies

    Bucket Policies provide specific users, or all users, conditional and granular permissions for specific actions.

    Policy conditions can be used to assign permissions for a range of objects that match the condition and can be used to automatically assign permissions to newly uploaded objects.

    How access to resources is managed when using the S3 protocol is described in https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html and you can use the information as the basis for understanding and using S3 bucket policies in cegedim.cloud Object Storage Service.

    This section provides basic information about the use of bucket policies.

    Manage Bucket Policies

    Bucket policies can be managed using aws s3api (other tools or SDKs work too):

    • get-bucket-policy

    • put-bucket-policy

    • delete-bucket-policy

    We use aws s3 and aws s3api command line tools from AWSCLIv2 on Linux.

    ${S3_ENDPOINT} and ${S3_PROFILE} are environment variables.

    Create Bucket Policy

    Create a JSON file and configure your policy:

    policy.json
    {
        "Version": "2012-10-17",
        "Id": "S3PolicyId1",
        "Statement": [
            {
                "Sid": "Grant permission to <access_key>",
                "Effect": "Allow",
                "Principal": ["<access_key>"],
                "Action": [ "s3:PutObject","s3:GetObject" ],
                "Resource":[ "bucket-test/*" ]
            }
        ]
    }

    The Principal element specifies the Object User Access Key that is allowed or denied access to a resource.

    You can use a wildcard '*' to mean all Object Users.

    Be careful: setting a wildcard as Principal in a Bucket Policy means anyone can access the resources and perform the allowed actions.

    Apply it to the bucket bucket-test:

    aws s3api --endpoint-url=${S3_ENDPOINT} put-bucket-policy --bucket bucket-test --policy file://policy.json --profile ${S3_PROFILE}

    Get Bucket Policy

    aws s3api --endpoint-url=${S3_ENDPOINT} get-bucket-policy --bucket bucket-test --profile ${S3_PROFILE}

    Delete Bucket Policy

    aws s3api --endpoint-url=${S3_ENDPOINT} delete-bucket-policy --bucket bucket-test --profile ${S3_PROFILE}

    Bucket Policy management scenarios

    Grant bucket permissions to a user

    {
        "Version": "2012-10-17",
        "Id": "S3PolicyId1",
        "Statement": [
            {
                "Sid": "Grant permission to user1",
                "Effect": "Allow",
                "Principal": ["<access_key>"],
                "Action": [ "s3:PutObject","s3:GetObject" ],
                "Resource":[ "arn:aws:s3:::mybucket/*" ]
            }
        ]
    }

    Grant read only bucket permissions to a user

    {
      "Version": "2012-10-17",
      "Id": "s3ReadOnlyforUser",
      "Statement": [
        {
          "Sid": "Grant read permission to user1",
          "Effect": "Allow",
          "Principal": ["<access_key>"],
          "Action": [
            "s3:GetObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::mybucket",
            "arn:aws:s3:::mybucket/*"
          ]
        }
      ]
    }

    Grant bucket permissions to all users (public access)

    cegedim.cloud Object Storage Service is directly accessible from Internet.

    If you grant public access to your Bucket or a subset of your Bucket, anyone can GET your objects.

    For more information, please read .

    Public bucket:
    {
        "Version": "2012-10-17",
        "Id": "S3PolicyId2",
        "Statement": [
            {
                "Sid": "Public Access to mybucket",
                "Effect": "Allow",
                "Principal": "*",
                "Action": [ "s3:GetObject" ],
                "Resource":[ "arn:aws:s3:::mybucket/*" ]
            }
        ]
    }

    Accessing Bucket via baseURL in a Web Browser

    With public access, Bucket content can be accessed directly using a WEB browser.

    The URL to access a public Bucket follows this format: https://<object-store_name>.storage-[eb4|et1].cegedim.cloud/<bucket_name>

    Example : https://cos-cegedimit-myit.storage-eb4.cegedim.cloud/my-bucket
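    For instance, with the example bucket above made public, any HTTP client can fetch an object directly (the object key shown is purely illustrative):

    curl https://cos-cegedimit-myit.storage-eb4.cegedim.cloud/my-bucket/public/report.pdf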

    Grant bucket permissions to all users (public access) to Objects under a specific prefix

    cegedim.cloud Object Storage Service is directly accessible from Internet.

    If you grant public access to your Bucket or a subset of your Bucket, anyone can GET your objects.

    With the following policy, all objects in the bucket my-bucket under the prefix public/ are publicly accessible:

    {
      "Version":"2012-10-17",
      "Statement":[
        {
          "Sid":"public-access-based-on-prefix",
          "Effect":"Allow",
          "Principal": "*",
          "Action":["s3:GetObject"],
          "Resource":["arn:aws:s3:::my-bucket/public/*"]
        }
      ]
    }

    Supported Policy Operations & Conditions

    Supported bucket policy operations

    Permissions for Object Operations

    Permission keyword
    Supported S3 operations

    Permissions for Bucket Operations

    Permission keyword
    Supported S3 operations

    Permissions for Bucket Sub-resource Operations

    Permission keyword
    Supported S3 operations

    Supported bucket policy conditions

    The condition element is used to specify conditions that determine when a policy is in effect.

    The following tables show the condition keys that are supported by cegedim.cloud Object Storage Service and that can be used in condition expressions.

    Supported generic AWS condition keys

    Key name
    Description
    Applicable operators

    Supported S3-specific condition keys for object operations

    Key name
    Description
    Applicable permissions

    Supported S3-specific condition keys for bucket operations

    Key name
    Description
    Applicable permissions

    s3:PutObjectVersionAcl

    PUT Object (for a Specific Version of the Object)

    s3:DeleteObject

    DELETE Object

    s3:DeleteObjectVersion

    DELETE Object (a Specific Version of the Object)

    s3:ListMultipartUploadParts

    List Parts

    s3:AbortMultipartUpload

    Abort Multipart Upload

    s3:GetBucketPolicy

    GET Bucket policy

    s3:DeleteBucketPolicy

    DELETE Bucket policy

    s3:PutBucketPolicy

    PUT Bucket policy

    aws:UserAgent

    Used to check the requester's client application.

    String operator

    aws:username

    Used to check the requester's user name.

    String operator

    s3:max-keys

    Limit the number of keys Object Storage Service returns in response to the Get Bucket (List Objects) request by requiring the user to specify the max-keys parameter.

    s3:ListBucket

    s3:ListBucketVersions

    s3:GetObject applies to latest version for a version-enabled bucket

    GET Object, HEAD Object

    s3:GetObjectVersion

    GET Object, HEAD Object This permission supports requests that specify a version number

    s3:PutObject

    PUT Object, POST Object, Initiate Multipart Upload, Upload Part, Complete Multipart Upload PUT Object

    s3:GetObjectAcl

    GET Object ACL

    s3:GetObjectVersionAcl

    GET ACL (for a Specific Version of the Object)

    s3:PutObjectAcl

    s3:DeleteBucket

    DELETE Bucket

    s3:ListBucket

    GET Bucket (List Objects), HEAD Bucket

    s3:ListBucketVersions

    GET Bucket Object versions

    s3:GetLifecycleConfiguration

    GET Bucket lifecycle

    s3:PutLifecycleConfiguration

    PUT Bucket lifecycle

    s3:GetBucketAcl

    GET Bucket acl

    s3:PutBucketAcl

    PUT Bucket acl

    s3:GetBucketCORS

    GET Bucket cors

    s3:PutBucketCORS

    PUT Bucket cors

    s3:GetBucketVersioning

    GET Bucket versioning

    s3:PutBucketVersioning

    aws:CurrentTime

    Used to check for date/time conditions

    Date operator

    aws:EpochTime

    Used to check for date/time conditions using a date in epoch or UNIX time (see Date Condition Operators).

    Date operator

    aws:principalType

    Used to check the type of principal (user, account, federated user, etc.) for the current request.

    String operator

    aws:SourceIp

    Used to check the requester's IP address.

    s3:x-amz-acl

    Sets a condition to require specific access permissions when the user uploads an object.

    s3:PutObject

    s3:PutObjectAcl

    s3:PutObjectVersionAcl

    s3:x-amz-grant-permission

    (for explicit permissions), where permission can be:read, write, read-acp, write-acp, full-control

    Bucket owner can add conditions using these keys to require certain permissions.

    s3:PutObject

    s3:PutObjectAcl

    s3:PutObjectVersionAcl

    s3:x-amz-server-side-encryption

    Requires the user to specify this header in the request.

    s3:PutObject

    s3:PutObjectAcl

    s3:VersionId

    Restrict the user to accessing data only for a specific version of the object

    s3:x-amz-acl

    Set a condition to require specific access permissions when the user uploads an object

    s3:CreateBucket

    s3:PutBucketAcl

    s3:x-amz-grant-permission

    (for explicit permissions), where permission can be:read, write, read-acp, write-acp, full-control

    Bucket owner can add conditions using these keys to require certain permissions

    s3:CreateBucket

    s3:PutBucketAcl

    s3:prefix

    Requires the user to specify this header in the request.

    s3:PutObject

    s3:PutObjectAcl

    s3:delimiter

    Require the user to specify the delimiter parameter in the Get Bucket (List Objects) request.

    Manage Bucket access

    PUT Object ACL

    PUT Bucket versioning

    String operator

    s3:PutObject

    s3:PutObjectAcl

    s3:DeleteObjectVersion

    s3:PutObject

    s3:PutObjectAcl

    s3:DeleteObjectVersion


    Object Lock

    Object lock prevents object version deletion during a user-defined retention period. Immutable S3 objects are protected using object- or bucket-level configuration of WORM and retention attributes.

    The retention policy is defined using the S3 API or bucket-level defaults.

    Objects are locked for the duration of the retention period, and legal hold scenarios are also supported.

    There are two lock types for object lock:

    • Retention period: Specifies a fixed period of time during which an object version remains locked. During this period, your object version is WORM-protected and can't be overwritten or deleted.

    • Legal hold: Provides the same protection as a retention period, but it has no expiration date. Instead, a legal hold remains in place until you explicitly remove it. Legal holds are independent from retention periods.

    There are two modes for the retention period:

    Governance mode

    Users can't overwrite or delete an object version or alter its lock settings unless they have special permissions.

    With Governance mode, you protect objects against being deleted by most users, but you can still grant some users permission to alter the retention settings or delete the object if necessary.

    You can also use Governance mode to test retention-period settings before creating a compliance-mode retention period.

    • Users cannot overwrite or delete an object version.

    • Users with s3:PutObjectRetention permission can increase an object retention period.

    • Users with the special s3:BypassGovernanceRetention permission can remove or shorten an object retention.

    • Users with s3:BypassGovernanceRetention permission can also delete locked objects.

    Compliance mode

    A protected object version can't be overwritten or deleted by any user, including the root user in your account.

    When an object is locked in compliance mode, its retention mode can't be changed, and its retention period can't be shortened.

    Compliance mode helps ensure that an object version can't be overwritten or deleted for the duration of the retention period.

    • Users cannot overwrite or delete an object version.

    • Users with s3:PutObjectRetention permission can increase an object retention period.

    • Users cannot remove or shorten an object retention.

    In Compliance mode, if you applied a wrong retention period (e.g. 6 years instead of 6 days), cegedim.cloud has no possibility to delete or shorten the retention period.

    A good practice is to start with Governance mode to perform tests, and then switch to Compliance mode.

    For more information, see: Protecting data with Amazon S3 Object Lock | Amazon Web Services.

    Object Lock Requirements

    • Object lock requires ADO (Access During Outage) disabled at the Object Store level

      • That means without ADO, Object operations of read, create, update and delete as well as list buckets not owned by an online site, will fail

      • Object Stores without ADO cannot be created using ITCare and must therefore be created manually by cegedim.cloud teams

  • Object lock only works with IAM (not legacy accounts)

  • IAM accounts are not managed by ITCare and must therefore be created manually by cegedim.cloud teams

  • Object lock works only with versioned buckets

  • Enabling locking on the bucket automatically makes it versioned

  • Once bucket locking is enabled, it is not possible to disable object lock or suspend versioning for the bucket

  • Object lock requires FS (File System) disabled on the bucket

  • Object lock is only supported by the S3 API

  • A bucket has a default configuration including a retention mode (governance or compliance) and a retention period (in days or years)

  • Object locks apply to individual object versions only

  • Different versions of a single object can have different retention modes and periods

  • A lock prevents an object from being deleted or overwritten. Overwritten does not mean that new versions can't be created (new versions can be created with their own lock settings)

  • An object can still be deleted version-wise. This creates a delete marker, and the version still exists and is locked

  • Compliance mode is stricter: locks can't be removed, decreased, or downgraded to governance mode

  • Governance mode is less strict: locks can be removed, bypassed, or even elevated to compliance mode

  • Updating an object version's metadata, which occurs when you place or alter an object lock, doesn't overwrite the object version or reset its Last-Modified timestamp

  • A retention period can be placed on an object explicitly, or implicitly through a bucket default setting

  • Placing a default retention setting on a bucket doesn't place any retention settings on objects that already exist in the bucket

  • Changing a bucket's default retention period doesn't change the existing retention period for any objects in that bucket

  • Object lock and traditional bucket/object retention can co-exist

    Lifecycle

    Objects under lock are protected from lifecycle deletions.

    Lifecycle logic is made difficult because of the variety of behavior of different locks.

    From a lifecycle point of view there are locks without a date, locks with date that can be extended, and locks with date that can be decreased.

    • For Compliance mode, the retain until date can't be decreased, but can be increased

    • For Governance mode, the lock date can increase, decrease, or be removed

    • For legal hold, the lock is indefinite

    Condition Keys

    Access control using IAM policies is an important part of the object lock functionality.

    The s3:BypassGovernanceRetention permission is important because it is required to delete a WORM-protected object in Governance mode.

    IAM policy conditions have been defined below to allow you to limit what retention period and legal hold can be specified in objects.

    It is not possible to manage IAM Policies with ITCare.

    Condition key
    Description

    s3:object-lock-legal-hold

    Enables enforcement of the specified object legal hold status

    s3:object-lock-mode

    Enables enforcement of the specified object retention mode

    s3:object-lock-retain-until-date

    Enables enforcement of a specific retain-until-date

    s3:object-lock-remaining-retention-days

    Enables enforcement of an object relative to the remaining retention days

    These condition keys can be used inside bucket and IAM policies to control object lock behaviors.

    Example: ensure the retention days does not exceed 5 years
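    One way such a rule could look (a hedged sketch following the generic AWS pattern for this condition key; the bucket name, the Deny on s3:PutObjectRetention and the 1825-day value are assumptions, not taken from this guide). The document would then be applied with put-bucket-policy, as shown on the Bucket Policies page:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "limit-retention-to-5-years",
          "Effect": "Deny",
          "Principal": "*",
          "Action": ["s3:PutObjectRetention"],
          "Resource": ["arn:aws:s3:::my-bucket/*"],
          "Condition": {
            "NumericGreaterThan": {
              "s3:object-lock-remaining-retention-days": "1825"
            }
          }
        }
      ]
    }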

    Examples and use cases

    Buckets

    We use the aws s3 and aws s3api command line tools from AWSCLIv2 on Linux. ${S3_ENDPOINT} and ${S3_PROFILE} are environment variables.

    Create a bucket with object lock enabled:

    aws s3api --endpoint-url=${S3_ENDPOINT} create-bucket --bucket bucket-with-lock --object-lock-enabled-for-bucket --profile ${S3_PROFILE}

    Output
    {
        "Location": "/bucket-with-lock"
    }

    You can only enable Object lock for new buckets. You can't enable Object lock on an existing Bucket.

    Add a lock configuration on your bucket:

    governance.json
    {
      "ObjectLockEnabled": "Enabled",
      "Rule": {
        "DefaultRetention": {
          "Mode": "GOVERNANCE",
          "Days": 1
        }
      }
    }

    aws s3api --endpoint-url=${S3_ENDPOINT} put-object-lock-configuration --bucket bucket-with-lock --object-lock-configuration file://governance.json --profile ${S3_PROFILE}

    Get the current lock configuration on a bucket:

    aws s3api --endpoint-url=${S3_ENDPOINT} get-object-lock-configuration --bucket bucket-with-lock --profile ${S3_PROFILE}

    Output
    {
        "ObjectLockConfiguration": {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "GOVERNANCE",
                    "Days": 1
                }
            }
        }
    }

    Objects

    In this context, there is no lock configuration defined at the bucket level, but the bucket was created with --object-lock-enabled-for-bucket.

    We use the aws s3 and aws s3api command line tools from AWSCLIv2 on Linux. ${S3_ENDPOINT} and ${S3_PROFILE} are environment variables.

    Put an object into a bucket:

    aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 cp feather.ttf s3://bucket-test

    Output
    upload: ./feather.ttf to s3://bucket-test/feather.ttf

    Apply a retention period on the object:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api put-object-retention --retention Mode=GOVERNANCE,RetainUntilDate=2022-08-26T17:00:00 --bucket bucket-test --key feather.ttf

    Use the head-object method to get the object metadata, including its retention information:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api head-object --bucket bucket-test --key feather.ttf

    Output
    {
        "LastModified": "Thu, 25 Aug 2022 12:32:09 GMT",
        "ContentLength": 81512,
        "ETag": "\"2232dadea2f05fa28e3f08b5b3346df9\"",
        "VersionId": "1661430729953",
        "ContentType": "font/ttf",
        "ServerSideEncryption": "AES256",
        "Metadata": {},
        "ObjectLockMode": "GOVERNANCE",
        "ObjectLockRetainUntilDate": "2022-08-26T17:00:00.000Z"
    }

    Increase retention (+1 day):

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api put-object-retention --retention Mode=GOVERNANCE,RetainUntilDate=2022-08-27T17:00:00 --bucket bucket-test --key feather.ttf

    Output
    {
        "LastModified": "Thu, 25 Aug 2022 12:32:09 GMT",
        "ContentLength": 81512,
        "ETag": "\"2232dadea2f05fa28e3f08b5b3346df9\"",
        "VersionId": "1661430729953",
        "ContentType": "font/ttf",
        "ServerSideEncryption": "AES256",
        "Metadata": {},
        "ObjectLockMode": "GOVERNANCE",
        "ObjectLockRetainUntilDate": "2022-08-27T17:00:00.000Z"
    }

    Delete the object:

    aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 rm s3://bucket-test/feather.ttf

    Output
    delete: s3://bucket-test/feather.ttf

    # List the content of the bucket, feather.ttf is not displayed anymore
    aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 ls s3://bucket-test/

    # No Output

    Check versions:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api list-object-versions --bucket bucket-test --prefix feather.ttf

    Output
    {
        "Versions": [
            {
                "ETag": "\"1747b668712195f92c827c7b23a169fc\"",
                "Size": 5,
                "StorageClass": "STANDARD",
                "Key": "feather.ttf",
                "VersionId": "1661431782902",
                "IsLatest": false,
                "LastModified": "2022-08-25T12:49:42.902Z",
                "Owner": {
                    "DisplayName": "urn:ecs:iam::cos-cegedimit-test-lock:root",
                    "ID": "urn:ecs:iam::cos-cegedimit-test-lock:root"
                }
            }
        ],
        "DeleteMarkers": [
            {
                "Owner": {
                    "DisplayName": "urn:ecs:iam::cos-cegedimit-test-lock:user/cloud",
                    "ID": "urn:ecs:iam::cos-cegedimit-test-lock:user/cloud"
                },
                "Key": "feather.ttf",
                "VersionId": "1661431795694",
                "IsLatest": true,
                "LastModified": "2022-08-25T12:49:55.694Z"
            }
        ]
    }

    This will display all of the object's versions as well as the delete marker created when we deleted the object previously.

    Delete the delete marker using the version Id:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api delete-object --bucket bucket-test --key feather.ttf --version-id 1661431795694

    Output
    {
        "DeleteMarker": true,
        "VersionId": "1661431795694"
    }

    # Once the delete marker is deleted, the object can be listed again
    aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 ls s3://bucket-test/feather.ttf

    Output
    2022-08-25 14:49:42          5 feather.ttf

    Upload a new version of the object (same aws s3 cp command as above). The feather.ttf object now has two versions:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api list-object-versions --bucket bucket-test --prefix feather.ttf

    Output
    {
        "Versions": [
            {
                "ETag": "\"2232dadea2f05fa28e3f08b5b3346df9\"",
                "Size": 81512,
                "StorageClass": "STANDARD",
                "Key": "feather.ttf",
                "VersionId": "1661432092692",
                "IsLatest": true,
                "LastModified": "2022-08-25T12:54:52.692Z",
                "Owner": {
                    "DisplayName": "urn:ecs:iam::cos-cegedimit-test-lock:root",
                    "ID": "urn:ecs:iam::cos-cegedimit-test-lock:root"
                }
            },
            {
                "ETag": "\"1747b668712195f92c827c7b23a169fc\"",
                "Size": 5,
                "StorageClass": "STANDARD",
                "Key": "feather.ttf",
                "VersionId": "1661431782902",
                "IsLatest": false,
                "LastModified": "2022-08-25T12:49:55.694Z",
                "Owner": {
                    "DisplayName": "urn:ecs:iam::cos-cegedimit-test-lock:root",
                    "ID": "urn:ecs:iam::cos-cegedimit-test-lock:root"
                }
            }
        ]
    }

    Delete a specific version:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api delete-object --bucket bucket-test --key feather.ttf --version-id 1661432092692

    Output
    {
        "VersionId": "1661432092692"
    }

    The deleted version "becomes" a delete marker.

    List versions:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api list-object-versions --bucket bucket-test --prefix feather.ttf

    Output
    {
        "Versions": [
            {
                "ETag": "\"1747b668712195f92c827c7b23a169fc\"",
                "Size": 5,
                "StorageClass": "STANDARD",
                "Key": "feather.ttf",
                "VersionId": "1661431782902",
                "IsLatest": false,
                "LastModified": "2022-08-25T12:49:55.694Z",
                "Owner": {
                    "DisplayName": "urn:ecs:iam::cos-cegedimit-test-lock:root",
                    "ID": "urn:ecs:iam::cos-cegedimit-test-lock:root"
                }
            }
        ],
        "DeleteMarkers": [
            {
                "Owner": {
                    "DisplayName": "urn:ecs:iam::cos-cegedimit-test-lock:user/cloud",
                    "ID": "urn:ecs:iam::cos-cegedimit-test-lock:user/cloud"
                },
                "Key": "feather.ttf",
                "VersionId": "1661432197362",
                "IsLatest": true,
                "LastModified": "2022-08-25T12:56:37.362Z"
            }
        ]
    }

    Use the --bypass-governance-retention option to bypass the governance policy and delete the delete marker:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api delete-object --bucket bucket-test --key feather.ttf --version-id 1661432197362 --bypass-governance-retention

    Output
    {
        "DeleteMarker": true,
        "VersionId": "1661432197362"
    }

    List versions:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api list-object-versions --bucket bucket-test --prefix feather.ttf

    Output
    {
        "Versions": [
            {
                "ETag": "\"1747b668712195f92c827c7b23a169fc\"",
                "Size": 5,
                "StorageClass": "STANDARD",
                "Key": "feather.ttf",
                "VersionId": "1661431782902",
                "IsLatest": true,
                "LastModified": "2022-08-25T12:49:55.694Z",
                "Owner": {
                    "DisplayName": "urn:ecs:iam::cos-cegedimit-test-lock:root",
                    "ID": "urn:ecs:iam::cos-cegedimit-test-lock:root"
                }
            }
        ]
    }

    Delete the latest version of the object with --bypass-governance-retention:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api delete-object --bucket bucket-test --key feather.ttf --version-id 1661431782902 --bypass-governance-retention

    Output
    {
        "VersionId": "1661431782902"
    }

    List versions:

    aws --profile=${S3_PROFILE} --endpoint ${S3_ENDPOINT} s3api list-object-versions --bucket bucket-test --prefix feather.ttf

    Empty result.

    List the bucket content:

    aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 ls s3://bucket-test/

    Empty bucket.

    Object Lock configuration

    Below are examples of lock configurations that can be applied on a bucket.

    The retention will be applied on each object put in the bucket.

    Lock Configuration Structure

    The lock configuration is a JSON document:

    {
      "ObjectLockEnabled": "Enabled",
      "Rule": {
        "DefaultRetention": {
          "Mode": "GOVERNANCE"|"COMPLIANCE",
          "Days": integer,
          "Years": integer
        }
      }
    }

    Mode

    The default Object Lock retention mode you want to apply to new objects placed in the specified bucket. Must be used with either Days or Years.

    Possible values are COMPLIANCE or GOVERNANCE.

    Days

    The number of days that you want to specify for the default retention period. Must be used with Mode.

    Years

    The number of years that you want to specify for the default retention period. Must be used with Mode.

    Days and Years are mutually exclusive.

    Governance configuration:

    {
        "ObjectLockConfiguration": {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "GOVERNANCE",
                    "Years": 1
                }
            }
        }
    }

    Compliance configuration:

    {
        "ObjectLockConfiguration": {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "COMPLIANCE",
                    "Days": 60
                }
            }
        }
    }

    S3 Browser

    S3 Browser is a freeware Windows client for Amazon S3.

    S3 Browser doesn't allow you to manage locks on buckets or objects.

    With S3 Browser you can see the headers on objects and get the current lock retention applied on the object.



    S3 API compatibility

    Supported S3 APIs

    Methods
    Notes

    GET Service

    cegedim.cloud Object Storage Service supports the marker and max-keys parameters to enable paging of the bucket list.

    For example:

    GET /?marker=<bucket>&max-keys=<num>
    GET /?marker=mybucket&max-keys=40

    DELETE Bucket

    Unsupported S3 APIs

    Methods
    Notes

    Specific behaviors

    Specific behaviors compared to the AWS API:

    Creation of buckets using names with fewer than three characters fails with 400 Bad Request, InvalidBucketName.


    When creating a bucket or object with empty content, cegedim.cloud Object Storage Service returns 400 invalid content-length value, whereas AWS returns 400 Bad Request.


    Copying an object to another bucket that indexes the same user metadata index key but with a different datatype is not supported and fails with 500 Server Error.


    When listing the objects in a bucket, if you use a prefix and delimiter but supply an invalid marker, cegedim.cloud Object Storage Service throws 500 Server Error, or 400 Bad Request for a file system-enabled bucket.

    However, AWS returns 200 OK and the objects are not listed.


    For versioning-enabled buckets, cegedim.cloud Object Storage Service does not create a delete marker when an already deleted object is deleted again.

    This differs from AWS, which always inserts a delete marker when deleting a deleted object in a versioning-enabled bucket.

    This change in behavior is only applicable when the deleted object is deleted again from the owner zone.


    When an attempt is made to create a bucket with a name that already exists, the behavior of cegedim.cloud Object Storage Service can differ from AWS.

    AWS always returns 409 Conflict when a user who has FULL_CONTROL permissions on the bucket, or any other permissions, attempts to recreate the bucket. When an Object User who has FULL_CONTROL or WRITE_ACP on the bucket attempts to recreate the bucket, cegedim.cloud Object Storage Service returns 200 OK and the ACL is overwritten; however, the owner is not changed. An Object User with WRITE/READ permissions will get 409 Conflict if they attempt to recreate a bucket.

    Where an attempt to recreate a bucket is made by the bucket owner, Object Storage Service returns 200 OK and overwrites the ACL. AWS behaves in the same way.

    Where a user has no access privileges on the bucket, an attempt to recreate the bucket throws a 409 Conflict error. AWS behaves in the same way.

    GET Bucket requestPayment

    cegedim.cloud Object Storage Service uses its own model for payments.

    GET Bucket website

    PUT Bucket logging

    PUT Bucket notification

    Notification is only defined for the reduced redundancy feature in S3. cegedim.cloud Object Storage Service does not currently support notifications.

    PUT Bucket tagging

    PUT Bucket requestPayment

    cegedim.cloud Object Storage Service uses its own model for payments.

    PUT Bucket website

    Object APIs

    GET Object torrent

    POST Object

    POST Object restore

    This operation is related to AWS Glacier, which is not supported.

    DELETE Bucket cors

    DELETE Bucket lifecycle

    Only the expiration part is supported in lifecycle.

    Policies related to archiving (AWS Glacier like) are not supported.

    Lifecycle is not supported on file system-enabled buckets.

    DELETE Bucket policy

    GET Bucket (List Objects)

    For file system-enabled buckets, / is the only supported delimiter when listing objects in the bucket.

    GET Bucket cors

    GET Bucket acl

    GET Bucket lifecycle

    Only the expiration part is supported in lifecycle.

    Policies related to archiving (AWS Glacier like) are not supported.

    Lifecycle is not supported on file system-enabled buckets.

    GET Bucket policy

    GET Bucket Object versions

    GET Bucket versioning

    HEAD Bucket

    List Multipart Uploads

    PUT Bucket

    Where PUT is performed on an existing bucket, refer to the bucket re-creation behavior described under Specific behaviors.

    PUT Bucket cors

    PUT Bucket acl

    PUT Bucket lifecycle

    Only the expiration part is supported in lifecycle.

    Policies related to archiving (AWS Glacier like) are not supported.

    Lifecycle is not supported on file system-enabled buckets.

    PUT Bucket policy

    Bucket policies cannot be configured for operations that are not supported by cegedim.cloud Object Storage Service.

    PUT Bucket versioning

    DELETE Object

    Delete Multiple Objects

    GET Object

    GET Object ACL

    HEAD Object

    PUT Object

    Supports chunked PUT

    PUT Object acl

    PUT Object - Copy

    OPTIONS object

    Initiate Multipart Upload

    Upload Part

    Upload Part - Copy

    Complete Multipart Upload

    cegedim.cloud Object Storage Service returns an ETag of 00 for this request. This differs from the Amazon S3 response.

    Abort Multipart Upload

    List Parts

    DELETE Bucket tagging

    DELETE Bucket website

    GET Bucket location

    cegedim.cloud Object Storage Service is only aware of a single region.

    GET Bucket logging

    GET Bucket notification

    Notification is only defined for reduced redundancy feature in S3. cegedim.cloud Object Storage Service does not currently support notifications.

    GET Bucket tagging
