@jethrolam Can you talk about how you modified the `import foo` thing? I have the same exact problem: I `import boto3` in a local file, and it broke everything.
In my case it is not `import boto3` but `boto3.client('s3')` that broke the test. My solution was just to move the latter from module level into function level, so it would not be called at all during `import foo`.
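A minimal sketch of that refactor (the module and function names here are illustrative, not from the original post):

```python
# foo.py -- before: the client is built at import time, so a moto mock
# established later never intercepts it.
import boto3

s3 = boto3.client('s3')  # runs during `import foo`


# foo.py -- after: the client is built lazily inside the function,
# so it is only created once the test (and its mock) is running.
import boto3

def put_report(bucket, key, body):
    s3 = boto3.client('s3')  # created at call time, under the mock
    s3.put_object(Bucket=bucket, Key=key, Body=body)
```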
@jethrolam Can you run your `import foo` from within `def test_thing():`? I'm wondering if that is a better fix than making a new boto3 client each time.
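For reference, that deferred-import pattern would look something like this (`foo` and its function are stand-ins for the module under test):

```python
from moto import mock_s3

@mock_s3
def test_thing():
    import foo  # imported only once the mock is active
    foo.do_the_s3_thing()  # hypothetical function in the module under test
```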
Hi,
I am getting the following error when writing tests that mock an SQS queue using moto:
```
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the CreateQueue operation: The security token included in the request is invalid.
```
Has anyone seen this before? I'm using:
```
moto==1.3.8
boto3==1.9.134
botocore==1.12.134
```
Appreciate your inputs. Thanks!
@kadusumilli1 You'll need to have a fixture or an equivalent that mocks out the AWS credentials via an environment variable, like so: https://github.com/Netflix-Skunkworks/swag-api/blob/master/swag_api/tests/conftest.py#L16-L22
You will need to have that before you declare your mocks.
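For reference, the linked fixture amounts to something like this (dummy values, exported before any mock or boto3 client is created):

```python
import os
import pytest

@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS credentials, so a stray request can never reach a real account."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'
```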
I solved it using:
```
botocore==1.12.86
git+https://github.com/spulec/moto.git@df493ea18de22b4533073eee3e296686019911c2
```
Installing via commit until a release newer than moto 1.3.8 is available.
I'm using identical code to what was running yesterday, and am now seeing this issue:
```
botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the CreateBucket operation: The AWS Access Key Id you provided does not exist in our records.
```
I've tried with the decorator, with `with`, and with `mock.start()`, and each time (when running in pytest) I get an error:
@pytest.fixture(scope="function", autouse=True) def create_buckets(): """Creates some mock s3 buckets before the start of every test""" mock = mock_s3() print(mock) mock.start() conn = boto3.resource("s3", region_name=settings.REGION_NAME) conn.create_bucket(Bucket=settings.RUN_BUCKET) conn.create_bucket(Bucket=settings.INGESTION_BUCKET) s3 = boto3.client("s3", region_name=settings.REGION_NAME) response = s3.list_buckets()
print("Existing buckets:")
for bucket in response["Buckets"]:
print(f' {bucket["Name"]}')
mock.stop()
```
    def create_buckets():
        """Creates some mock s3 buckets before the start of every test"""
        mock = mock_s3()
        mock.start()
        s3 = boto3.resource('s3')
>       s3.create_bucket(Bucket=settings.RUN_BUCKET)

conftest.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
venv/lib/python3.6/site-packages/boto3/resources/factory.py:520: in do_action
    response = action(self, *args, **kwargs)
venv/lib/python3.6/site-packages/boto3/resources/action.py:83: in __call__
    response = getattr(parent.meta.client, operation_name)(**params)
venv/lib/python3.6/site-packages/botocore/client.py:357: in _api_call
    return self._make_api_call(operation_name, kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <botocore.client.S3 object at 0x1151529e8>
operation_name = 'CreateBucket'
api_params = {'Bucket': 'example-local-product-bucket'}

    def _make_api_call(self, operation_name, api_params):
        operation_model = self._service_model.operation_model(operation_name)
        service_name = self._service_model.service_name
        history_recorder.record('API_CALL', {
            'service': service_name,
            'operation': operation_name,
            'params': api_params,
        })
        if operation_model.deprecated:
            logger.debug('Warning: %s.%s() is deprecated',
                         service_name, operation_name)
        request_context = {
            'client_region': self.meta.region_name,
            'client_config': self.meta.config,
            'has_streaming_input': operation_model.has_streaming_input,
            'auth_type': operation_model.auth_type,
        }
        request_dict = self._convert_to_request_dict(
            api_params, operation_model, context=request_context)

        service_id = self._service_model.service_id.hyphenize()
        handler, event_response = self.meta.events.emit_until_response(
            'before-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            model=operation_model, params=request_dict,
            request_signer=self._request_signer, context=request_context)

        if event_response is not None:
            http, parsed_response = event_response
        else:
            http, parsed_response = self._make_request(
                operation_model, request_dict, request_context)

        self.meta.events.emit(
            'after-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            http_response=http, parsed=parsed_response,
            model=operation_model, context=request_context
        )

        if http.status_code >= 300:
            error_code = parsed_response.get("Error", {}).get("Code")
            error_class = self.exceptions.from_code(error_code)
>           raise error_class(parsed_response, operation_name)
E           botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the CreateBucket operation: The AWS Access Key Id you provided does not exist in our records.
```
I've run into the same problem (moto not actually mocking the requests); upgrading to 1.3.10 did the trick.
I blindly upgraded boto3/moto, and just accidentally deleted buckets from my AWS account trying to run test cases locally where `mock_s3()` was being used... (luckily I noticed some funky bucket names and managed to ctrl-c before too much damage was done...) :(

Reminder to self: use switchrole, and don't let your default AWS access account have access to anything valuable.

I think I'll migrate my test cases away from using/calling moto's `mock_s3()` directly and use a localstack container (which includes moto) instead. At least that way I can set the `endpoint_url` and be certain (?) that boto3 isn't accessing my real account...
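A sketch of that setup, assuming a localstack container exposing its default edge port 4566:

```python
import boto3

# With an explicit endpoint_url, requests physically cannot reach AWS:
# they go to the local container or fail outright.
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:4566',  # assumed localstack edge port
    region_name='us-east-1',
    aws_access_key_id='testing',           # dummy credentials
    aws_secret_access_key='testing',
)
s3.create_bucket(Bucket='local-only-bucket')
```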
I think we should add some documentation to the readme to help clarify how to avoid issues similar to @monkut's. It's not great to hear that your unit tests manipulated your real environment; that is a major problem.
I'm a big fan of pytest and pytest fixtures. For all of my moto tests, I have a `conftest.py` file where I define the following fixtures:
```python
@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'

@pytest.fixture(scope='function')
def s3(aws_credentials):
    with mock_s3():
        yield boto3.client('s3', region_name='us-east-1')

@pytest.fixture(scope='function')
def sts(aws_credentials):
    with mock_sts():
        yield boto3.client('sts', region_name='us-east-1')

@pytest.fixture(scope='function')
def cloudwatch(aws_credentials):
    with mock_cloudwatch():
        yield boto3.client('cloudwatch', region_name='us-east-1')
```
... etc.
All of the AWS/mocked fixtures take in a parameter of `aws_credentials`, which sets the proper fake environment variables -- which is needed. Then, whenever I need to do anything with the mocked AWS environment, I do something like:
```python
def test_create_bucket(s3):
    # s3 is a fixture defined above that yields a boto3 s3 client.
    # Feel free to instantiate another boto3 S3 client -- keep note of the region though.
    s3.create_bucket(Bucket="somebucket")
    result = s3.list_buckets()
    assert len(result['Buckets']) == 1
    assert result['Buckets'][0]['Name'] == 'somebucket'
```
Taking this approach works for all of my tests. I have had some issues with Tox and Travis CI occasionally -- typically I need to do something along the lines of `touch ~/.aws/credentials` for those to work. However, using the latest moto, boto3, and the fixtures I have above always seems to work for me without issues.
Also, Protip: this might be controversial, but I always make use of in-function `import` statements for my unit tests. This ensures that the mocks are established before any of the actual code runs.
Example:
```python
def test_something(s3):
    from some.package.that.does.something.with.s3 import some_func  # <-- In-function import for unit test
    # ^^ Importing here ensures that the mock has been established.

    some_func()  # The mock has been established from the fixture, so this function that uses
                 # an S3 client will properly use the mock and not reach out to AWS.
```
Finally!
I would say that people shouldn't have to fear that running unit tests which use moto risks modifying their real AWS environments. I don't suspect there are many folks who would want to run tests against real AWS while still mocking some bits of it. What I'm trying to say is that currently, using moto requires a lot of attention, and even that may go in vain with a newer version of the AWS libraries or moto. Maybe we should all put some pressure on Amazon so they expose some mocking points to make everyone's life easier? Amazon has a feedback page, IIRC.
I can still reproduce the issue with S3 and SQS tests on:
```
boto3 = "==1.9.189"
botocore = "==1.12.189"
moto = "==1.3.13"
```
Same with `moto = "==1.3.10"`.
@yitzikc Are your mocks running before your code is executed?
I just added a Protip to my post above, which should assist users when designing their tests with moto.
I got the same problem, and my solution was to import the moto libraries before the boto3 library. There are certainly some conflicts between the libraries. Hope it'll help some people :)
Indeed, when care is taken to import Moto before any imports of Boto3 or Botocore, the mocking works properly. I had to watch out for imported modules which were themselves importing Boto. Also, when running Pytest on multiple files, imports for tests in one file would interfere with the ones run subsequently, so I had to import Moto in any test file that might import modules which ultimately import Boto.
I just made a PR to introduce more AWS Config features (#2363), and I updated the readme with the wording in my post above. Please review and let me know if there is anything else I should add: https://github.com/spulec/moto/blob/1c268e3580b0976c9867b50124f665320c188148/README.md#very-important----recommended-usage
Changing the import order is still not working for me 😢. Using the same versions: `boto3 = "==1.9.189"`, `botocore = "==1.12.189"`, `moto = "==1.3.13"` / `"==1.3.10"`.
The problem is not the import order -- it's the order in which a boto client is instantiated. If it is instantiated BEFORE a mock is established, it won't work.
Please review: https://github.com/spulec/moto#very-important----recommended-usage
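A minimal illustration of that ordering rule:

```python
import boto3
from moto import mock_s3

client = boto3.client('s3', region_name='us-east-1')  # created BEFORE any mock

@mock_s3
def test_broken():
    client.list_buckets()  # the pre-existing client can bypass the mock

@mock_s3
def test_working():
    client = boto3.client('s3', region_name='us-east-1')  # created under the mock
    client.list_buckets()  # intercepted by moto
```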
Hi guys, not sure if I understand everything correctly, but I allowed myself to report an issue to boto3 to make mocking easier -- can anyone from the moto core team comment on https://github.com/boto/boto3/issues/2123? Maybe there's something the boto team could do to avoid such problems. This bug is not the only one reported to moto -- some of these issues are more than a year old, and people still have problems with tests hitting real AWS servers.
I'm actually curious whether I permanently fixed this issue in #2578.
Can you all verify whether the latest `master` fixes this issue? The changes in #2578 seem like they should fix it once and for all.
@mikegrima, thank you very much for your responses. I get the exception below even after mocking my DynamoDB table with pytest fixtures.
```
[CPython37:setup:stdout] >       raise error_class(parsed_response, operation_name)
[CPython37:setup:stdout] E       botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the DescribeTable operation: The security token included in the request is invalid.
[CPython37:setup:stdout]
```
This is my source code, in a file named `db_utils.py`:
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import json

import boto3
from botocore.exceptions import ClientError
from dynamodb_json import json_util as db_json

from .converters import DecimalEncoder


class DynamoDbOperations:
    """
    Class to perform dynamo_db common operations.
    By default connects to us-west-2 (Oregon) region.
    """

    def __init__(self, table_name: str, region_name: str = 'us-west-2'):
        dynamo_db_resource = boto3.resource('dynamodb', region_name=region_name)
        # Use client to handle exceptions and resolve the error.
        dynamo_db_client = boto3.client('dynamodb', region_name=region_name)
        try:
            self._table = dynamo_db_resource.Table(table_name)
            print("{} table created on {}".format(table_name, self._table.creation_date_time))
        except dynamo_db_client.exceptions.ResourceNotFoundException as e:
            # Log error
            print("Error: {}".format(e))

    def insert_item(self, json_item):
        """
        Inserts item into dynamo_db table
        :param json_item: Item as Json object
        :return:
        """
        if type(json_item) is not dict:
            raise ValueError("Insert Item: {} must be json object".format(json_item))
        try:
            response = self._table.put_item(Item=json_item)
        except ClientError as ce:
            print(ce.response['Error']['Message'])
        else:
            print("PutItem succeeded")
            clean_response = db_json.loads(response)
            print(json.dumps(clean_response, indent=4, cls=DecimalEncoder))
            return clean_response

    def get_item(self, primary_key: dict):
        """
        Get item from table based on primary_key
        :param primary_key: Dictionary of partition_key and sort_key(optional)
        :return: Returns json object with primary key
        """
        if type(primary_key) is not dict:
            raise ValueError("primary_key: {} must be dictionary "
                             "of partition_key and sort_key(optional)".format(primary_key))
        try:
            response = self._table.get_item(Key=primary_key)
        except ClientError as ce:
            print(ce.response['Error']['Message'])
        else:
            item = response['Item']
            print("GetItem succeeded")
            clean_response = db_json.loads(item)
            print(json.dumps(clean_response, indent=4, cls=DecimalEncoder))
            return clean_response

    def modify_item(self, primary_key: dict,
                    update_expression: str,
                    expression_attribute_values: dict,
                    condition_expression: str = None,
                    return_values: str = "UPDATED_NEW"
                    ):
        """
        Update/Modify item based on primary_key.
        You can update values of existing attributes, add new attributes, or remove attributes.
        More info:
        Update Expression: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html
        Condition Expression: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html#Expressions.ConditionExpressions.SimpleComparisons
        :param primary_key: Dictionary of partition_key and sort_key(optional)
        :param update_expression: An update expression consists of one or more clauses.
            Each clause begins with a SET, REMOVE, ADD, or DELETE keyword. You can include any of these
            clauses in an update expression, in any order. However, each action keyword can appear only once.
        :param condition_expression: To perform a conditional update
        :param expression_attribute_values: Attribute values to be updated/added/deleted
        :param return_values: Return type after performing update
        :return: Returns updated json attributes
        """
        if type(primary_key) is not dict:
            raise ValueError("primary_key: {} must be dictionary "
                             "of partition_key and sort_key(optional)".format(primary_key))
        if not 1 <= len(primary_key) <= 2:
            raise Exception("primary_key: {} must contain "
                            "partition_key and sort_key(optional) only".format(primary_key))
        if type(expression_attribute_values) is not dict:
            raise ValueError("expression_attribute_values: {} must be dictionary".format(expression_attribute_values))
        try:
            if condition_expression:
                response = self._table.update_item(
                    Key=primary_key,
                    UpdateExpression=update_expression,
                    ConditionExpression=condition_expression,
                    ExpressionAttributeValues=expression_attribute_values,
                    ReturnValues=return_values
                )
            else:
                response = self._table.update_item(
                    Key=primary_key,
                    UpdateExpression=update_expression,
                    ExpressionAttributeValues=expression_attribute_values,
                    ReturnValues=return_values
                )
        except ClientError as e:
            if e.response['Error']['Code'] == "ConditionalCheckFailedException":
                print(e.response['Error']['Message'])
            else:
                raise
        else:
            print("UpdateItem succeeded:")
            clean_response = db_json.loads(response)
            print(json.dumps(clean_response, indent=4, cls=DecimalEncoder))
            return clean_response

    def delete_item(self, primary_key: dict,
                    expression_attribute_values: dict = None,
                    condition_expression: str = None):
        """
        Deletes an item from table
        :param primary_key: Dictionary of partition_key and sort_key(optional)
        :param expression_attribute_values: Items with matching attribute values to be deleted
        :param condition_expression: To perform a conditional delete
        :return:
        """
        if type(primary_key) is not dict:
            raise ValueError("primary_key: {} must be dictionary "
                             "of partition_key and sort_key(optional)".format(primary_key))
        if condition_expression:
            if expression_attribute_values is None:
                raise ValueError("expression_attribute_values: {} "
                                 "must be provided".format(expression_attribute_values))
            elif type(expression_attribute_values) is not dict:
                raise ValueError("expression_attribute_values: {} must be a dictionary".format(expression_attribute_values))
        try:
            if condition_expression:
                response = self._table.delete_item(
                    Key=primary_key,
                    ConditionExpression=condition_expression,
                    ExpressionAttributeValues=expression_attribute_values
                )
            else:
                response = self._table.delete_item(Key=primary_key)
        except ClientError as ce:
            if ce.response['Error']['Code'] == "ConditionalCheckFailedException":
                print(ce.response['Error']['Message'])
            else:
                raise
        else:
            print("DeleteItem succeeded:")
            clean_response = db_json.loads(response)
            print(json.dumps(clean_response, indent=4, cls=DecimalEncoder))
            return clean_response
```
Here is my test file:
```python
from db_utils import DynamoDbOperations
from moto import mock_dynamodb2
import pytest
import boto3
import os

TEST_DYNAMO_TABLE_NAME = 'test'

@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'
    os.environ['AWS_DEFAULT_REGION'] = 'us-west-2'

@pytest.fixture
def dynamo_db_table(aws_credentials):
    def _table(table_name):
        with mock_dynamodb2():
            boto3.client('dynamodb').create_table(
                AttributeDefinitions=[
                    {'AttributeName': 'id', 'AttributeType': 'S'}
                ],
                TableName=f'{table_name}',
                KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
                ProvisionedThroughput={
                    'ReadCapacityUnits': 5,
                    'WriteCapacityUnits': 5,
                },
            )
            yield boto3.resource('dynamodb').Table(f'{table_name}')
    yield _table

def test_dynamo_db_utils_init(dynamo_db_table):
    DynamoDbOperations(TEST_DYNAMO_TABLE_NAME)

def test_dynamo_db_utils_insert_item(dynamo_db_table):
    json_item = {
        'id': '123',
        'name': 'karthik'
    }
    dynamo_db_table(TEST_DYNAMO_TABLE_NAME)
    db = DynamoDbOperations(TEST_DYNAMO_TABLE_NAME)
    response = db.insert_item(json_item)
    assert 200 in response
```
Is there anything I'm missing here? Appreciate your help.
I wrapped the test in the decorator:
```python
@mock_dynamodb2
def test_dynamo_db_utils_init(dynamo_db_table):
    DynamoDbOperations(TEST_DYNAMO_TABLE_NAME)
```
The invalid security token error went away, but now I see:
```
test_dynamo_db_utils.py Error: An error occurred (ResourceNotFoundException) when calling the DescribeTable operation: Requested resource not found
```
Any idea how to mock this line in the fixtures?
```python
print("{} table created on {}".format(table_name, self._table.creation_date_time))
```
Can we mock `self._table.creation_date_time`?
Hi @karthikvadla, where in your test are you calling the 'DescribeTable' operation, i.e. on which line is it failing? I can't see it in the code you provided.
The creation time can be accessed like this (assuming `conn` is a boto3 DynamoDB client):
```python
table_description = conn.describe_table(TableName=name)
created_on = table_description["Table"]["CreationDateTime"]
```
Hi all, I'm getting the same issue reported above. Any advice would be greatly appreciated:
```python
from moto import mock_s3
import boto3
import pytest
import os

@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'

@pytest.fixture(scope='function')
def s3(aws_credentials):
    with mock_s3():
        yield boto3.client('s3', region_name='us-east-1')

def test_create_bucket(s3):
    # s3 is a fixture defined above that yields a boto3 s3 client.
    # Feel free to instantiate another boto3 S3 client -- keep note of the region though.
    s3.create_bucket(Bucket="somebucket")
    result = s3.list_buckets()
    assert len(result['Buckets']) == 1
    assert result['Buckets'][0]['Name'] == 'somebucket'
```
```
$ pytest moto.py
```
output:
```
______ ERROR collecting moto.py _________________
ImportError while importing test module '/Users/dmullen/scratch/py-serverless/fixture-tests/moto.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
moto.py:1: in <module>
    from moto import mock_s3
E   ImportError: cannot import name 'mock_s3' from partially initialized module 'moto' (most likely due to a circular import) (/Users/dmullen/scratch/py-serverless/fixture-tests/moto.py)
```
versions:
@drewmullen The name of your test file conflicts with the moto package. Rename your file to something other than `moto.py`.
Edit: I think I just need to be more careful with import order.
Is this meant to be fixed? I'm seeing what looks to be a similar issue trying to mock out DynamoDB calls.
```
moto==1.3.14
boto==2.49.0
boto3==1.14.7
botocore==1.17.7
```
Partial example:
```python
def setup_table():
    ddb_client = boto3.client('dynamodb')
    ddb_client.create_table(
        AttributeDefinitions=[
            {'AttributeName': 'email', 'AttributeType': 'S'},
            {'AttributeName': 'timestamp', 'AttributeType': 'S'},
        ],
        TableName='contact-form-submissions',
        KeySchema=[
            {'AttributeName': 'email', 'KeyType': 'HASH'},
            {'AttributeName': 'timestamp', 'KeyType': 'RANGE'},
        ],
        ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1},
    )

@mock_dynamodb2
def test_save_to_db():
    setup_table()
    result = save_to_db(DATA)
    assert result is True
```
I'm getting an error:
```
botocore.errorfactory.ResourceInUseException: An error occurred (ResourceInUseException) when calling the CreateTable operation: Table already exists: contact-form-submissions
```
And it looks like that table is getting created on real AWS, not mocked as I'd expected. Am I doing something wrong, or is there a regression?
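For what it's worth, a hedged sketch of the recommended-usage shape for this test (reusing `setup_table`, `save_to_db`, and `DATA` from the example above): export dummy credentials first, so that even a misfiring mock cannot reach a real account, then create every client only while the mock is active.

```python
import os
import boto3
from moto import mock_dynamodb2

# Dummy credentials: even if the mock misfires, no real account is reachable.
os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'

@mock_dynamodb2
def test_save_to_db():
    setup_table()              # its boto3 client is now created under the mock
    result = save_to_db(DATA)  # save_to_db / DATA come from the code under test
    assert result is True
```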
This is happening for me with moto 1.3.16:
```
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the Publish operation: No account found for the given parameters
```
Rolling back and using 1.3.10 seems to resolve the issue.
Hi @mickog, this might fix your issue: https://github.com/spulec/moto/blob/master/README.md#very-important----recommended-usage
Cheers @bblommers, will take a look at refactoring my tests.
The link from @bblommers in the comment above is now contained in the documentation here : https://docs.getmoto.org/en/latest/docs/getting_started.html#recommended-usage
Test cases written with moto make actual AWS API calls through botocore instead of mocking them. This happens with the latest version of boto3 (1.8.x). It used to work fine without issues with the 1.7.x versions.

Sample code to reproduce error

Expected result

The method should return the `ListBuckets` response. It should look something like:

Actual error

Full stack trace

Library versions