# Object Storage - Get started

## First steps

### Create an Object Store

Connect to ITCare and access the **Storage** section from the main menu on the left, then click on "***Create an Object Store***".

<figure><img src="https://docs.cegedim.cloud/download/attachments/101716614/image2022-11-9_13-52-4.png?version=1&#x26;modificationDate=1667998324216&#x26;api=v2&#x26;effects=drop-shadow" alt=""><figcaption></figcaption></figure>

Select the "**Data Center**" (also called *Region*) that will host your **Object Store**:

{% hint style="info" %}
When **Geo-Replication** is **enabled**, objects are available on both endpoints.
{% endhint %}

{% hint style="danger" %}
**Geo-Replication** cannot be enabled or disabled once the **Object Store** has been created.
{% endhint %}

<figure><img src="https://docs.cegedim.cloud/download/attachments/101716614/image2022-11-7_8-56-12.png?version=1&#x26;modificationDate=1667807772383&#x26;api=v2&#x26;effects=drop-shadow" alt=""><figcaption></figcaption></figure>

Search and select a "**Global Service**":

<figure><img src="https://docs.cegedim.cloud/download/attachments/101716614/image2022-11-9_14-19-59.png?version=1&#x26;modificationDate=1667999999382&#x26;api=v2&#x26;effects=drop-shadow" alt=""><figcaption></figcaption></figure>

Enter a name for your **Object Store** (see [Limitation & Best Practices#Limitations](https://docs.cegedim.cloud/display/OST/Limitation+and+Best+Practices#LimitationandBestPractices-Limitations)).

You can also set a "***Quota***". The quota can be changed at any time after the **Object Store** has been created.

{% hint style="info" %}
Your Object Store name will be prefixed with "cos" + the ***Cloud Name*** of the selected Global Service: **`cos-<cloud_name>-<Your Object Store name>`**

Example: **`cos-cegedimit-hello`**
{% endhint %}
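As an illustration, the naming rule can be sketched in shell (the values below are the ones from the example above, purely hypothetical):

```shell
# Hypothetical values taken from the example above
CLOUD_NAME="cegedimit"   # Cloud Name of the selected Global Service
STORE_NAME="hello"       # the name entered in ITCare

# ITCare builds the final name as cos-<cloud_name>-<name>
FULL_NAME="cos-${CLOUD_NAME}-${STORE_NAME}"
echo "${FULL_NAME}"      # cos-cegedimit-hello
```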

<figure><img src="https://docs.cegedim.cloud/download/attachments/101716614/image2022-11-7_8-58-41.png?version=1&#x26;modificationDate=1667807921892&#x26;api=v2&#x26;effects=drop-shadow" alt=""><figcaption></figcaption></figure>

The last step is a summary of your **Object Store** creation request.

Check that the information is correct, then click the "**Submit**" button to launch the creation.

<figure><img src="https://docs.cegedim.cloud/download/attachments/101716614/image2022-11-10_8-38-6.png?version=1&#x26;modificationDate=1668065885844&#x26;api=v2&#x26;effects=drop-shadow" alt=""><figcaption></figcaption></figure>

Once the creation is done (it can take a few minutes), a pop-up appears, displaying the credentials and available endpoints for your **Object Store**:

#### **Credentials**

* User Name → your **access\_key**
* Password → your **secret\_key**

{% hint style="danger" %}
Keep your **secret\_key** safe: it will not be displayed again.
{% endhint %}

You can regenerate it later; see [manage-object-users](https://academy.cegedim.cloud/storage/object-storage/object-storage-get-started/manage-object-users "mention").

#### **Endpoints**

* If you selected a Data Center with **Geo-Replication enabled** (Step 2), you will have **2 endpoints**, one for each Data Center.
* If you selected a Data Center with **Geo-Replication disabled** (Step 2), you will have only **1 endpoint**, corresponding to the selected Data Center.

<figure><img src="https://docs.cegedim.cloud/download/attachments/101716614/image2022-11-10_8-45-38.png?version=1&#x26;modificationDate=1668066338450&#x26;api=v2&#x26;effects=drop-shadow" alt=""><figcaption></figcaption></figure>

### Manage an Object Store

This page shows detailed information about your **Object Store**:

* The Global Service the Object Store is part of
* The Data Center where the Object Store is located
* Global size and number of objects
* Quota status
* Object users

From this page you can also:

* Manage the Quota
* Manage Object Users
* Delete the Object Store

<figure><img src="https://835168969-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FXoHyOBZPpJv3UALn4V%2Fuploads%2Fgit-blob-b91b4e9b242de782d021297b9dbff33691866814%2Fimage2022-11-10_11-20-1.jpg?alt=media" alt=""><figcaption></figcaption></figure>

### Create a Bucket

Now that your **Object Store** is created, it is time to create a **Bucket**:

{% hint style="info" %}
We use **aws s3** and **aws s3api** command line tools from AWSCLIv2 on Linux.

`${S3_ENDPOINT}` and `${S3_PROFILE}` are environment variables.
{% endhint %}
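Before running the commands below, the endpoint, keys, and profile have to be wired together. Here is a minimal sketch of that setup (every value is a placeholder; `AWS_SHARED_CREDENTIALS_FILE` is a standard AWS CLI variable, used here only to keep the sketch self-contained):

```shell
# Placeholders: replace with the endpoint and keys displayed
# when your Object Store was created.
export S3_ENDPOINT="https://s3.example-endpoint.cloud"
export S3_PROFILE="my-object-store"

# Store the access_key / secret_key in a dedicated AWS CLI profile
# (written to a local file here instead of ~/.aws/credentials).
export AWS_SHARED_CREDENTIALS_FILE="${PWD}/aws-credentials"
cat > "${AWS_SHARED_CREDENTIALS_FILE}" <<EOF
[${S3_PROFILE}]
aws_access_key_id = <your access_key>
aws_secret_access_key = <your secret_key>
EOF
```

All the `aws` commands on this page then simply reference `--endpoint-url=${S3_ENDPOINT}` and `--profile=${S3_PROFILE}`.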

Use the command "**`mb`**" to create a **Bucket**:

```bash
aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 mb s3://my-bucket

# Output
make_bucket: my-bucket
```

List the Buckets:

```bash
# List buckets
aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 ls
 
# Output
2022-11-13 11:43:54 my-bucket
```

### Upload an Object

Now that we have a **Bucket**, let's upload objects into it:

{% hint style="info" %}
We use **aws s3** and **aws s3api** command line tools from AWSCLIv2 on Linux.

`${S3_ENDPOINT}` and `${S3_PROFILE}` are environment variables.
{% endhint %}

Use the command "**`cp`**" to upload an object:

```bash
aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 cp feather.ttf s3://my-bucket
# Output
upload: ./feather.ttf to s3://my-bucket/feather.ttf
 
# List content of the bucket
aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 ls s3://my-bucket
 
# Output
2022-11-13 11:47:42 81512 feather.ttf
```

You can specify a **prefix** when you upload an object to a Bucket:

```bash
aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 cp feather.ttf s3://my-bucket/prefix-1/prefix-2/
 
# Output
upload: ./feather.ttf to s3://my-bucket/prefix-1/prefix-2/feather.ttf
 
# List content of the bucket
aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 ls s3://my-bucket/
 
# Output
PRE prefix-1/
 
# List content of the bucket at prefix-1
aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 ls s3://my-bucket/prefix-1/
 
# Output
PRE prefix-2/
```

### Manage Object Store Quota

You can set a **Quota** on your **Object Store** to limit the space used.

{% hint style="warning" %}
When the Quota is reached, uploads to buckets in the **Object Store** **are denied**.
{% endhint %}

To manage the **Quota**, go to the detailed information page of your Object Store and click on the **Manage quota** button.

<figure><img src="https://835168969-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FXoHyOBZPpJv3UALn4V%2Fuploads%2Fgit-blob-c3c0fe68ca0cb173cb68e848f3ac91dd77ea14ef%2Fimage2022-11-13_11-56-35.png?alt=media" alt=""><figcaption></figcaption></figure>

You can set the **Quota** from **1 GB** up to **8 TB**.

{% hint style="info" %}
If you need more than an 8 TB **Quota**, please contact cegedim.cloud.
{% endhint %}

Once the Quota is applied, you can follow its status on the detailed information page of your **Object Store**:

<figure><img src="https://835168969-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FXoHyOBZPpJv3UALn4V%2Fuploads%2Fgit-blob-d5e98e638aa1ffbe6adb6fea70abf4a6d2fcad24%2Fimage2022-11-13_12-2-39.png?alt=media" alt=""><figcaption></figcaption></figure>

When the **Quota** limit is reached, upload is denied (**HTTP 403 Forbidden**):

{% code overflow="wrap" %}

```bash
aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} s3 cp feather.ttf s3://my-bucket/hello/
```

{% endcode %}

{% code title="Output" overflow="wrap" %}

```
upload failed: ./feather.ttf to s3://my-bucket/hello/feather.ttf An error occurred (Forbidden) when calling the PutObject operation: Check if quota has been exceeded or object has too many versions
```

{% endcode %}

## Recommended Clients <a href="#title-text" id="title-text"></a>

### S3Browser

**S3 Browser** is freeware for Windows desktops (only). The free version offers basic functionality; for advanced features, the [pro version](https://s3browser.com/buypro.aspx) must be purchased.

{% embed url="http://s3browser.com/" %}

### AWS CLI

**AWS CLI** is the official AWS command line interface. It offers all functionalities and best performance to use with cegedim.cloud Object Storage Service.

{% embed url="https://aws.amazon.com/cli/?nc1=h_ls" %}

### S5cmd

**s5cmd** is an alternative to the **aws cli**: a very fast tool for S3 and local filesystem operations.

{% embed url="https://github.com/peak/s5cmd" %}

### Software Development Kit (SDK) <a href="#recommendedclients-softwaredevelopmentkit-sdk" id="recommendedclients-softwaredevelopmentkit-sdk"></a>

We recommend using the official AWS SDKs:

* **Java** : <https://aws.amazon.com/documentation/sdkforjava/>
* **.NET** : <https://aws.amazon.com/documentation/sdkfornet/>
* **Node.js** : <https://aws.amazon.com/documentation/sdk-for-javascript/>
* **Python** : <https://boto3.readthedocs.org/en/latest/>

{% embed url="https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html" %}

## Resilience and DRP <a href="#title-text" id="title-text"></a>

If you are using a **Geo-Replicated** **Object Store**, your objects will be available in both regions of the cegedim.cloud Object Storage Service.

Best practice is to use a client with a built-in capability to switch over between the two regions, so that it can:

* Automatically fail over to the second region when the primary is down
* Fail over your application to the other region with no configuration changes

<details>

<summary>Java pseudo code</summary>

{% code lineNumbers="true" %}

```java
/**
 * Class responsible for delivering a valid S3 client to the other components of the application.
 * It holds S3 client instances and can test, for each region, whether the client is available.
 */
class S3ClientFactory {
    private List<AmazonS3Client> clients;
 
    public void init() {
        // initialization of your clients here
    }
 
    /**
     * Returns a valid S3 client, or throws an exception.
     */
    public AmazonS3Client getClient() throws ServiceUnavailableException {
        for (AmazonS3Client client : clients) {
            if (client.isAvailable()) {
                return client;
            }
        }
        throw new ServiceUnavailableException("No S3 client is currently available");
    }
 
}
 
class MyApplication {
     
    S3ClientFactory factory = new S3ClientFactory();
 
    // get a client, whatever the region is
    AmazonS3Client client = factory.getClient();
 
    client.putObject("mybucket", "path/to/object", "One more");
 
}
```

{% endcode %}

</details>

## Emptying a Bucket <a href="#title-text" id="title-text"></a>

{% hint style="danger" %}
* Emptying a Bucket is **irreversible.**
* Deleted Objects or Buckets **can't be restored.**
* Delete operation **can take time** depending on the number of objects and versions stored in the bucket.
{% endhint %}

### Using S3 CLI

You can use any S3 CLI tools, like AWS CLI, [s3cmd](https://s3tools.org/s3cmd) or [s3browser](https://s3browser.com/).

You can empty a bucket with these tools only **if the bucket does not have versioning enabled**. (See [manage-versioning-in-bucket](https://academy.cegedim.cloud/storage/object-storage/object-storage-get-started/manage-versioning-in-bucket "mention"))

If versioning is not enabled, you can use the **rm** (remove) command with the `--recursive` parameter to empty the bucket (or remove a subset of objects with a specific key name prefix).

The following `rm` command removes objects whose key name starts with the prefix `doc`, for example `doc/doc1` and `doc/doc2`.

```bash
$ aws s3 rm s3://bucket-name/doc --recursive
```

Use the following command to remove all objects without specifying a prefix.

```bash
$ aws s3 rm s3://bucket-name --recursive
```

{% hint style="warning" %}
The `rm` command can't remove objects from a bucket that **has versioning enabled**: S3 adds a delete marker when you delete an object, which is what this command does.

See [bucket-lifecycle](https://academy.cegedim.cloud/storage/object-storage/object-storage-features/bucket-lifecycle "mention") for more information.
{% endhint %}
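If you do need to purge a versioned bucket from the CLI, one possible approach (a sketch, not an official procedure) is to list every version and delete marker with `aws s3api list-object-versions` and feed them to `aws s3api delete-objects`. The JSON below is a hand-made sample of a `list-object-versions` response, used to show the transformation offline:

```shell
# Sample output of:
#   aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} \
#       s3api list-object-versions --bucket my-bucket
cat > versions.json <<'EOF'
{
  "Versions": [
    {"Key": "feather.ttf", "VersionId": "v1"},
    {"Key": "feather.ttf", "VersionId": "v2"}
  ],
  "DeleteMarkers": [
    {"Key": "feather.ttf", "VersionId": "v3"}
  ]
}
EOF

# Build the payload expected by `aws s3api delete-objects`
python3 - <<'EOF'
import json

with open("versions.json") as f:
    listing = json.load(f)

# Every version AND every delete marker must be deleted explicitly.
objects = [{"Key": o["Key"], "VersionId": o["VersionId"]}
           for o in listing.get("Versions", []) + listing.get("DeleteMarkers", [])]

with open("delete.json", "w") as f:
    json.dump({"Objects": objects, "Quiet": True}, f, indent=2)
EOF

# Then (real call, shown commented out):
# aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} \
#     s3api delete-objects --bucket my-bucket --delete file://delete.json
```

Note that `list-object-versions` paginates its results (1000 entries per call), so a real script must loop until the listing is no longer truncated, and this still will not delete objects protected by **Object Lock**.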

{% hint style="danger" %}
You can't remove objects with **Object Lock** enabled until the defined retention period is reached.
{% endhint %}

### Using a lifecycle configuration

If you use a **lifecycle** configuration to empty your bucket, the configuration should cover:

* [current versions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/versioning-workflows.html)
* [non-current versions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/versioning-workflows.html)
* [delete markers](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html)
* [incomplete multipart uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpu-abort-incomplete-mpu-lifecycle-config.html)

You can add **lifecycle** configuration rules to expire all objects or a subset of objects that have a specific key name prefix. For example, to remove all objects in a bucket, you can set a **lifecycle** rule to expire objects one day after creation.

If your bucket **has versioning enabled**, you can also configure the rule to expire **non-current objects**.

To fully empty the contents of a versioning enabled bucket, you will need to configure an expiration policy on **both current and non-current** objects in the bucket.

You can add a **lifecycle** policy to the bucket using the AWS CLI or a GUI client like [s3browser](https://s3browser.com/bucket-lifecycle-configuration.aspx).
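As a sketch of the AWS CLI path (reusing the `${S3_ENDPOINT}` and `${S3_PROFILE}` variables from the earlier examples, with `my-bucket` as a placeholder bucket name), a lifecycle configuration is written to a file and applied with `aws s3api put-bucket-lifecycle-configuration`:

```shell
# Write the lifecycle configuration to a file (here, the non-versioned
# "expire everything after 1 day" example shown below).
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "Expiration": {"Days": 1},
      "ID": "lifecycle-v2-expire-current-and-mpu",
      "Filter": {"Prefix": ""},
      "Status": "Enabled",
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
    }
  ]
}
EOF

# Apply it to the bucket (real call, shown commented out):
# aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} \
#     s3api put-bucket-lifecycle-configuration \
#     --bucket my-bucket --lifecycle-configuration file://lifecycle.json

# And verify what is actually applied:
# aws --endpoint-url=${S3_ENDPOINT} --profile=${S3_PROFILE} \
#     s3api get-bucket-lifecycle-configuration --bucket my-bucket
```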

{% hint style="info" %}
For more information about lifecycle configuration, see [bucket-lifecycle](https://academy.cegedim.cloud/storage/object-storage/object-storage-features/bucket-lifecycle "mention").
{% endhint %}

Find below some lifecycle policies to empty a bucket:

<details>

<summary>Non-versioned Bucket</summary>

```json
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1
            },
            "ID": "lifecycle-v2-expire-current-and-mpu",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled",
            "AbortIncompleteMultipartUpload": {
                "DaysAfterInitiation": 1
            }
        }
    ]
}
```

</details>

<details>

<summary>Versioned Bucket</summary>

```json
{
    "Rules": [
        {
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "lifecycle-v2-expire-non-current-and-dmarkers-and-mpu",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled",
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 1
            },
            "AbortIncompleteMultipartUpload": {
                "DaysAfterInitiation": 1
            }
        }
    ]
}
```

</details>

<details>

<summary>Both versioned and non-versioned Buckets</summary>

Below is an example of a **lifecycle** configuration that empties a Bucket whether versioning is **Enabled** or **Disabled**:

```json
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1
            },
            "ID": "lifecycle-v2-purge-all",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled",
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 1
            },
            "AbortIncompleteMultipartUpload": {
                "DaysAfterInitiation": 1
            }
        },
        {
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled"
        }
    ]
}
```

</details>
