Shrinking indices in Elasticsearch


The Problem

Today, we started receiving the following error from our production Elasticsearch cluster when a new index was about to be created:

  {
    "error": {
      "root_cause": [
        {
          "type": "validation_exception",
          "reason": "Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [991]/[1000] maximum shards open;"
        }
      ],
      "type": "validation_exception",
      "reason": "Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [991]/[1000] maximum shards open;"
    },
    "status": 400
  }

The error message makes it obvious: creating the new index would breach the cluster's limit of 1,000 shards.

Confirming the number from the error message using the _cat/shards endpoint, we saw that we had 991 shards on our only data node:

$ curl -s "https://<aws_es_url>_cat/shards" | wc -l

We had about 99 indices, and each index had 5 primary shards plus one replica of each, which contributes another 5 shards, for a total of 10 shards per index. We can confirm that by checking the index endpoint:

$ curl -s "https://<aws_es_url><index_name>?pretty"

which shows the following output (shortened for brevity):

  "settings": {
    "number_of_shards": "5",
    "number_of_replicas": "1"
  }
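
As a sanity check, the arithmetic lines up with the error message. A quick back-of-the-envelope calculation (taking the roughly 99 indices mentioned above at face value):

```shell
# Rough shard math for our cluster: 5 primaries + 5 replicas per index.
indices=99
shards_per_index=$((5 + 5))
total=$((indices * shards_per_index))
echo "$total shards"   # 990, close to the 991 reported by _cat/shards
```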

Looking around in the AWS help docs, we found three suggested solutions:

Suggested fixes

The 7.x versions of Elasticsearch have a default setting of no more than 1,000 shards per node. Elasticsearch throws an error if a request, such as creating a new index, would cause you to exceed this limit. If you encounter this error, you have several options:

  • Add more data nodes to the cluster.
  • Increase the cluster.max_shards_per_node setting via the _cluster/settings endpoint.
  • Use the _shrink API to reduce the number of shards on the node.
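
For completeness, option 2 might look something like the sketch below. The endpoint is a placeholder and 2000 is an arbitrary example value, so treat this as an illustration rather than a recommendation:

```shell
# Sketch of raising the per-node shard limit (option 2). The curl call is
# left commented out because <aws_es_url> is a placeholder.
SETTINGS='{"persistent": {"cluster.max_shards_per_node": 2000}}'
# curl -XPUT -H 'Content-Type: application/json' \
#   "https://<aws_es_url>_cluster/settings" -d"$SETTINGS"
echo "$SETTINGS"
```

Note that raising the limit only hides the symptom: lots of small shards still carry per-shard memory and coordination overhead.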

We chose the shrink option because all our indices are small enough that they do not need 5 shards.

How to Shrink?

It is a 3-step process:

Step 1: Block writes on the current index

$ curl -XPUT -H 'Content-Type: application/json' "https://<aws_es_url><current_index_name>/_settings" -d'{
  "settings": {
    "index.number_of_replicas": 0,
    "index.routing.allocation.require._name": "shrink_node_name",
    "index.blocks.write": true
  }
}'

This removes the replicas, relocates all remaining shards to a single node, and blocks writes, which are the prerequisites for shrinking. (Replace shrink_node_name with the name of the node that should hold all the shards; with a single data node, that is your only node.)

Step 2: Start shrinking with the new shard count

$ curl -XPOST -H 'Content-Type: application/json' "https://<aws_es_url><current_index_name>/_shrink/<new_index_name>" -d'{
  "settings": {
    "index.number_of_replicas": 1,
    "index.number_of_shards": 1,
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}'
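
One constraint worth knowing: the _shrink API requires the source primary shard count to be a multiple of the target count. With 5 primaries (a prime number), the only smaller valid target is 1, which is why we shrink straight to a single shard. A quick check:

```shell
# Valid smaller shrink targets for a 5-primary index: divisors of 5.
src=5
valid=""
for t in 1 2 3 4; do
  if [ $((src % t)) -eq 0 ]; then
    valid="$valid $t"
  fi
done
echo "valid smaller targets:$valid"   # only 1
```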

You can track the progress of the shrinking via the /_cat/recovery endpoint. Once the shrinking is complete, you can verify the document count via the _cat/indices endpoint.

Once you are happy with the shrinking, go to the next step.

Step 3: Delete the old index

$ curl -XDELETE https://<aws_es_url><current_index_name>

You can run the above commands for multiple indices with a shell script like the one below (place the index names in /tmp/indices.txt, one index name per line):

while read source; do
   <curl command>
done </tmp/indices.txt
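
Filled out, the loop could look like the following sketch. It only prints the commands (a dry run) so you can review them before executing; the sample index list and the "-v2" naming for the shrunk indices are assumptions for illustration:

```shell
#!/bin/sh
# Dry run: print the three-step shrink commands for every index listed
# in /tmp/indices.txt.
printf 'index-a\nindex-b\n' > /tmp/indices.txt   # sample list for illustration
ES="https://<aws_es_url>"                        # placeholder endpoint
while read -r source; do
  echo "curl -XPUT  ${ES}${source}/_settings ..."
  echo "curl -XPOST ${ES}${source}/_shrink/${source}-v2 ..."
  echo "curl -XDELETE ${ES}${source}"
done </tmp/indices.txt
```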

Permanent Fix

The above 3 steps only fix the existing indices. We'll need to make a code change to ensure that new indices created from now on are also created with the new setting of one shard.

Include settings.number_of_shards and settings.number_of_replicas in the request payload along with mappings when creating a new index. PHP code for reference:

    'settings' => [
        'number_of_shards' => 1,
        'number_of_replicas' => 1,
    ],
    'mappings' => [
        'properties' => [
            // ...
        ],
    ],
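
If you are not on PHP, the same fix applies in any client, since it is just the index-creation payload. A hedged, language-agnostic sketch over the REST API (placeholders as before, curl left commented out):

```shell
# Index-creation body with explicit shard settings alongside the mappings.
CREATE_BODY='{
  "settings": { "number_of_shards": 1, "number_of_replicas": 1 },
  "mappings": { "properties": {} }
}'
# curl -XPUT -H 'Content-Type: application/json' \
#   "https://<aws_es_url><new_index_name>" -d"$CREATE_BODY"
echo "$CREATE_BODY"
```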

You are now done! 👏

You have successfully fixed both existing indices and new indices.

Static Websites with AWS CloudFront and S3


Why are CloudFront & S3 better for hosting static sites?

AWS CloudFront is a CDN that can be used to serve static HTML sites backed by S3 storage.

S3 storage is very cheap. Combined with CloudFront, your sites are served with low latency.

Deploy to S3 and start CloudFront cache invalidation

Why automated deployment?

Automated deployments allow your changes to go live instantly and automatically whenever you push to the repository.

You can forget about copying the code manually and uploading to your S3 bucket.

Why trigger cache invalidation?

Once the files are copied to S3, we need to trigger an invalidation for the cache in CloudFront. Otherwise, CloudFront will continue to serve the old content from its cache.
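
The manual equivalent of that invalidation is a single aws CLI call. The distribution ID below is a placeholder, so the command is shown rather than executed:

```shell
# One-off CloudFront invalidation of every path (manual version of the
# invalidation step that the pipelines below automate).
DIST_ID="<distribution_id>"   # placeholder
CMD="aws cloudfront create-invalidation --distribution-id $DIST_ID --paths '/*'"
echo "$CMD"   # review, then run it
```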

We’ll see examples and code snippets for both Bitbucket and Github below.

Using Bitbucket Pipelines

Pipelines is Bitbucket’s CI/CD tool.

Steps to setup deployment:

  • Create a file named bitbucket-pipelines.yml in the project directory.
  • Paste the below code into the file and supply your AWS_Access_key_ID, AWS_Secret_access_key and $CloudFront_Distribution_Id values through 'Repository variables' (explained after the snippet).
  • After adding your AWS keys, save and push your changes to the Bitbucket repo, and it is done!
image: node:10.15.0

pipelines:
  default:
    - step:
        name: Deploy to S3
        deployment: production
        script:
          - pipe: atlassian/aws-s3-deploy:0.4.4
            variables:
              AWS_ACCESS_KEY_ID: $AWS_Access_key_ID
              AWS_SECRET_ACCESS_KEY: $AWS_Secret_access_key
              AWS_DEFAULT_REGION: 'us-east-1'
              S3_BUCKET: ''
              ACL: 'public-read'
    - step:
        name: Invalidate CloudFront cache
        script:
          - pipe: atlassian/aws-cloudfront-invalidate:0.3.3
            variables:
              AWS_ACCESS_KEY_ID: $AWS_Access_key_ID
              AWS_SECRET_ACCESS_KEY: $AWS_Secret_access_key
              AWS_DEFAULT_REGION: 'us-east-1'
              DISTRIBUTION_ID: $CloudFront_Distribution_Id

You can add the variables directly in the YML file, but it is recommended to add them through 'Repository variables' under Pipelines settings in the repository settings.

Using GitHub Actions

Actions is GitHub’s CI/CD tool.

Steps to setup deployment:

  • Create a file named deploy-to-s3.yml under .github/workflows/ in the project directory.
  • Add the required variables to the repository secrets.
  • After adding your secrets, push your changes to the GitHub repo and see the magic!
name: Deploy Website

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy to S3
        uses: jakejarvis/s3-sync-action@master
        with:
          args: '--acl public-read --delete'
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_PRODUCTION_BUCKET_NAME }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          SOURCE_DIR: build
      - name: Invalidate CloudFront Cache
        uses: awact/cloudfront-action@master
        env:
          SOURCE_PATH: ./public
          AWS_REGION: us-east-1
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          DISTRIBUTION_ID: ${{ secrets.DISTRIBUTION_ID }}

Now that you know how to automatically deploy to S3 on every change to the repository, you can learn two tricks about routing options with S3 hosting.

Route pages without index.html filename suffix

By default, CloudFront expects the full URL to match the actual path in S3. For example, if you have a path <bucket_url>/<subdirectory>/index.html, you will need to open the full URL, including index.html, in your browser to view the file.


If we try to open the URL without the index.html suffix, we will get an AccessDenied error.

But there is a small trick we can apply so that you can open just the subdirectory URL from your browser and let CloudFront/S3 serve the index.html automatically.

From an SEO perspective, it is also good to strip the HTML suffix anyway: the overall URL is shorter, which search engines like.

So here is the trick: the Origin Domain Name of the distribution has to be updated in CloudFront. Open the distribution's origin settings, click 'Edit', and change the Origin Domain Name from the default S3 REST endpoint to the S3 static website hosting endpoint (the variant that includes your region name).

Once the CloudFront Distribution is updated after the above settings change, try opening the subdirectory URL directly and it will work:

Route pages without *.html filename suffix

It is also possible to route a file such as <bucket_path>/page.html to a URL without the .html suffix. Yet again, this is beneficial for SEO purposes and your URL looks neat.

To get this working, follow these two steps:

  1. Remove .html suffix/extension from your original file either before uploading to S3 or after.
  2. Override the Content-Type metadata of this file to text/html. Select the file -> Actions -> Change metadata. (The default value for files without an extension is binary/octet-stream, which triggers a download when opened in the browser.)
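
Step 1 can be scripted locally before uploading. A small sketch (the build/ directory name and sample page are assumptions for illustration):

```shell
# Strip the .html extension from every page in build/ before uploading.
mkdir -p build
echo '<!DOCTYPE html><html lang="en"><body>Hello</body></html>' > build/page.html
for f in build/*.html; do
  mv "$f" "${f%.html}"
done
ls build   # page
```

Step 2 (the Content-Type override) still has to happen on the S3 side, e.g. with aws s3 cp --content-type, as in the automated pipeline step shown further below.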

Now you should be able to access your URL without the HTML suffix:

If you have more than one file whose metadata needs updating, or if you are deploying from Bitbucket Pipelines or GitHub Actions, you can automate the metadata update. Example step:

    - step:
        name: Update Metadata on HTML files
        image: fuinorg/atlassian-default-image-awscli:latest
        script:
          - >-
            while read file; do
              AWS_ACCESS_KEY_ID=$AWS_Access_key_ID AWS_SECRET_ACCESS_KEY=$AWS_Secret_access_key AWS_DEFAULT_REGION=us-east-1 aws s3 cp --content-type="text/html" --metadata-directive="REPLACE" --acl=public-read s3://<bucket>/$file s3://<bucket>/$file
            done <$BITBUCKET_CLONE_DIR/files_to_update_in_s3.txt

files_to_update_in_s3.txt should contain the list of files whose metadata needs to be set. You can generate this file dynamically, for example with a gulp task.

The doctype has to be set correctly in the HTML file for this to work. Example:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
</head>
<body>
Hello world
</body>
</html>

Note that there is a small difference between the earlier subdirectory approach and this one: URLs from the subdirectory approach end with a trailing slash, while URLs from this one have no slash.