AWS Lambda function to write CSV data to an Amazon DynamoDB table

Below is the code for a Lambda function that is triggered when a CSV file is uploaded to an Amazon S3 bucket and writes the data to an Amazon DynamoDB table. I am getting the error "expected str, bytes or os.PathLike object, not dict".

Could you please suggest where my mistake is?
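For context: `s3_client.get_object()` returns a dictionary of response metadata, with the file contents held in its `'Body'` stream, so handing the whole response to `open()` raises exactly this error. A minimal sketch of pulling the CSV text out of the response; the bucket and key names here are placeholders:

```python
import boto3

s3_client = boto3.client('s3')

# Hypothetical bucket/key, for illustration only.
response = s3_client.get_object(Bucket='my-bucket', Key='data.csv')
print(type(response))  # <class 'dict'> -- not a file path

# The actual file contents live in the 'Body' StreamingBody.
csv_text = response['Body'].read().decode('utf-8')
```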

```python
import csv
import io

import boto3

s3_client = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')

def csv_write(table_name, rows):
    """Batch-write a list of row dicts to the given DynamoDB table."""
    table = dynamodb.Table(table_name)

    # batch_writer() (not csv_write(), which is not a Table method)
    # buffers put_item calls, sends them in batches of up to 25 items,
    # and retries unprocessed items automatically.
    with table.batch_writer() as batch:
        for row in rows:
            batch.put_item(Item=row)
    return True

def read_csv(csv_body, rows):
    """Parse CSV text into dicts (one per row) and append them to rows."""
    reader = csv.DictReader(io.StringIO(csv_body))

    for row in reader:
        rows.append(row)

def lambda_handler(event, context):
    try:
        bucket = event['Records'][0]['s3']['bucket']['name']
        csv_file_name = event['Records'][0]['s3']['object']['key']
        response = s3_client.get_object(Bucket=bucket, Key=csv_file_name)

        # get_object() returns a dict of response metadata; the file
        # contents are in the 'Body' stream. Passing the whole response
        # to open() is what raised "expected str, bytes or os.PathLike
        # object, not dict".
        csv_body = response['Body'].read().decode('utf-8')

        table_name = 'batch_data'
        items = []

        read_csv(csv_body, items)
        status = csv_write(table_name, items)

        if status:
            print('Data saved')
        else:
            print('Error in saving data...')
    except Exception as err:
        print(err)
```
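If it helps while debugging, here is a minimal sketch of invoking the handler locally with a hand-built S3 notification event; the bucket name and object key are placeholders, and the event carries only the fields the handler actually reads:

```python
# Hypothetical bucket/key, for local testing only.
sample_event = {
    'Records': [{
        's3': {
            'bucket': {'name': 'my-example-bucket'},
            'object': {'key': 'uploads/batch.csv'},
        }
    }]
}

if __name__ == '__main__':
    lambda_handler(sample_event, None)
```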


