
PutObject fails with SignatureDoesNotMatch following release of 1.36 #4400

Open
1 task done
davidjmemmett opened this issue Jan 17, 2025 · 17 comments
Assignees
Labels
bug This issue is a confirmed bug. investigating This issue is being investigated and/or work is in progress to resolve the issue. p1 This is a high priority issue potential-regression Marking this issue as a potential regression to be checked by team member s3

Comments

@davidjmemmett

Describe the bug

We're seeing intermittent signature errors when using boto3==1.36.x and botocore==1.36.x with S3 transfer acceleration enabled. Either disabling transfer acceleration or downgrading to boto3==1.35.99 allows the upload to succeed.

botocore.exceptions.ClientError: An error occurred (SignatureDoesNotMatch) when calling the PutObject operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.

Regression Issue

  • Select this option if this issue appears to be a regression.

Expected Behavior

PutObject call succeeds without error.

Current Behavior

Intermittent SignatureDoesNotMatch errors

Reproduction Steps

import boto3
from botocore.config import Config

boto3_session = boto3.session.Session()

client = boto3_session.client(
    service_name="s3",
    config=Config(
        s3={
            "use_accelerate_endpoint": True,
        },
        signature_version="s3v4",
    ),
)

# local_path, bucket_name, upload_path, and mimetype are placeholders.
client.upload_file(
    local_path,
    bucket_name,
    upload_path,
    ExtraArgs={
        "ContentType": mimetype,
    },
)

Possible Solution

No response

Additional Information/Context

Looking at the announcement regarding integrity checks in 1.36.x, it looks as though this may be related.

SDK version used

1.36.x

Environment details (OS name and version, etc.)

Python 3.9.4 on Ubuntu 20.04.6 LTS

@davidjmemmett davidjmemmett added bug This issue is a confirmed bug. needs-triage This issue or PR still needs to be triaged. labels Jan 17, 2025
@github-actions github-actions bot added the potential-regression Marking this issue as a potential regression to be checked by team member label Jan 17, 2025
@davidjmemmett
Author

It looks like somebody else has hit the same issue: awslabs/amazon-sns-python-extended-client-lib#20

@hbjydev

hbjydev commented Jan 17, 2025

We're running into this issue ourselves, too. Could someone take a look or at least point us at the right place to go fiddle with it? :)

@WangFirefly

I have the same problem:

  • An error occurred (ContentSHA256Mismatch) when calling the PutObject operation: The provided content-sha256 does not match what was computed.

  • It works fine with boto3==1.33.1.

  • I have tested three S3-compatible clouds:

  1. Alibaba Cloud OSS is OK with 1.36.0.
  2. Huawei Cloud OBS and ByteDance TOS both show the above error.

@jonathan343
Contributor

jonathan343 commented Jan 17, 2025

Hey @davidjmemmett, thanks for bringing this issue to our attention!

We're currently investigating this behavior and hoping to get more information about your request.

Can you provide debug logs for the request you're making? The following example shows how to enable them:

import boto3
boto3.set_stream_logger("")

Is there a specific region where you're seeing these SignatureDoesNotMatch errors?

@RyanFitzSimmonsAK RyanFitzSimmonsAK added s3 p1 This is a high priority issue and removed needs-triage This issue or PR still needs to be triaged. labels Jan 17, 2025
@RyanFitzSimmonsAK RyanFitzSimmonsAK self-assigned this Jan 17, 2025
@RyanFitzSimmonsAK RyanFitzSimmonsAK added the investigating This issue is being investigated and/or work is in progress to resolve the issue. label Jan 17, 2025
@oliverhaas

Another workaround seems to be to revert to signature version "s3" (v2) instead of "s3v4", if that version is still available for the bucket or third-party S3 provider.

@eyurchuk

eyurchuk commented Jan 19, 2025

This issue is related to new Config parameters that are not yet supported by some third-party S3-compatible providers.
As stated in the botocore Config reference:

request_checksum_calculation (str) –
Determines when a checksum will be calculated for request payloads. Valid values are:

when_supported – When set, a checksum will be calculated for all request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true or a requestAlgorithmMember is modeled.
when_required – When set, a checksum will only be calculated for request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true or where a requestAlgorithmMember is modeled and supplied.
Defaults to None.

response_checksum_validation (str) –
Determines when checksum validation will be performed on response payloads. Valid values are:

when_supported – When set, checksum validation is performed on all response payloads of operations modeled with the httpChecksum trait where responseAlgorithms is modeled, except when no modeled checksum algorithms are supported.
when_required – When set, checksum validation is not performed on response payloads of operations unless the checksum algorithm is supported and the requestValidationModeMember member is set to ENABLED.

Changing your config to:

import os
import boto3
from botocore.config import Config

s3_client = boto3.client(
    's3',
    endpoint_url=os.getenv('S3_ENDPOINT'),
    config=Config(
        request_checksum_calculation="when_required",
        response_checksum_validation="when_required",
    ),
)

fixes the problem.

@jonathan343 this issue seems to be related to third-party S3 providers only.

@davidjmemmett
Author

@eyurchuk this is untrue; I'm using pure AWS resources.

@davidjmemmett
Author

> Hey @davidjmemmett, thanks for bringing this issue to our attention!
>
> We're currently investigating this behavior and hoping to get more information about your request.
>
> Can you provide debug logs for the request you're making? The following example shows how to enable them:
>
>     import boto3
>     boto3.set_stream_logger("")
>
> Is there a specific region where you're seeing these SignatureDoesNotMatch errors?

I'm happy to provide redacted logs privately. I will try to get in touch via our AWS accounts team.

@davidjmemmett
Author

I might have found another clue - in the debug logs, it indicated that boto3 was correcting the region for the bucket early on in the request. After explicitly setting the region_name for the S3 client, I am unable to reproduce the error.

Omitting the region_name causes intermittent failures - sometimes it works, sometimes it doesn't work.

@9128305

9128305 commented Jan 21, 2025

Setting region_name, request_checksum_calculation, and response_checksum_validation does not work for me; I still get the same error. (It's a third-party cloud provider.)

@jonathan343
Contributor

Hey, just wanted to provide some clarification here related to the original issue described by @davidjmemmett.

The SignatureDoesNotMatch error you're receiving from Amazon S3 is a server-side issue; we've made the responsible team aware of it, and they're currently investigating. This issue is specific to S3 buckets with transfer acceleration enabled and requests made using "use_accelerate_endpoint": True.

This issue also exists in versions of boto3 < 1.36.0, which you can reproduce by manually setting a checksum in an older version of the SDK:

import boto3
boto3.set_stream_logger("")
from botocore.config import Config


client = boto3.client(
    "s3",
    config=Config(
        s3={
            "use_accelerate_endpoint": True,
        },
    ),
)

client.upload_file(
    "aws-example-file.txt", 
    "aws-example-bucket",
    "aws-example-file.txt",
    ExtraArgs={
        "ChecksumAlgorithm": "CRC32"
    }
)

However, because of the changes to default checksum behavior in boto3>=1.36.0, this issue has become more common.

> I might have found another clue - in the debug logs, it indicated that boto3 was correcting the region for the bucket early on in the request. After explicitly setting the region_name for the S3 client, I am unable to reproduce the error.
>
> Omitting the region_name causes intermittent failures - sometimes it works, sometimes it doesn't work.

This is interesting behavior you're seeing here. Botocore has custom S3 redirect logic that attempts to re-route your request to the proper region of your bucket if it's initially configured incorrectly. For debugging purposes, can you clarify which region your bucket exists in and which region you're configuring your client with in both cases you described?
Note: If not set explicitly, the SDK will use the region configured in the AWS_REGION environment variable or in your shared AWS config file (~/.aws/config).

@davidjmemmett
Author

Thank you for following this up @jonathan343 - I'll give it a whirl tomorrow.

Regarding the regional workaround: I'm using AWS IAM SSO, and the profile I was using to test was configured to use us-west-2 by default, whereas the bucket was in one of the eu-west regions.

Cheers,
David

@jonathan343
Contributor

> Regarding the regional workaround: I'm using AWS IAM SSO, and the profile I was using to test was configured to use us-west-2 by default, whereas the bucket was in one of the eu-west regions.

Got it. When looking into this, I was able to reproduce in us-west-2 and not us-east-1, so the issue is inconsistent between regions. I'll follow up once I have information from S3. Thanks again for bringing this issue to our attention!

@jonathan343
Contributor

Also, because you're using a high-level S3 operation (upload_file), setting the request_checksum_calculation config to when_required won't resolve the issue for the reason mentioned in boto/s3transfer#327.

I have PRs open with the fix and will work on getting them released ASAP.

Until then the best way to mitigate this issue is to do one of the following:

  • Pin to a version of boto3 < 1.36.0
  • Use the low-level put_object operation with request_checksum_calculation configured to when_required.

@davidjmemmett
Author

Looking at those change requests, will they effectively disable the new checksum functionality? That's not what I'd want; I want checksums computed so that what I uploaded is validated.

@jonathan343
Contributor

jonathan343 commented Jan 21, 2025

Starting in boto3-1.36.0, the request_checksum_calculation config is set to when_supported in all AWS SDKs and AWS CLI (see announcement for more details). This means SDKs will now compute checksums by default for service operations that model support for them.

If users want to disable the default checksum behavior, the when_required config value can be set for request_checksum_calculation. This works as expected for low-level operations; however, for high-level operations, when_required currently doesn't work as intended. The PRs I linked simply add the ability for customers (like those who commented in this issue) who use third-party S3-compatible services to disable this functionality using the when_required config if needed.

See the Data Integrity Protections for Amazon S3 guide for more details.

Sorry for the confusion. Please let me know if I can clear anything up.

@9128305

9128305 commented Jan 22, 2025

Checked boto/s3transfer#329 - thanks, it fixes at least my problems with third-party providers.


8 participants