
AWS Data Pipeline - data automation
Hello! 

Today Amazon Web Services released a new service that works together with its other services and helps transfer data between:
- S3
- MySQL on RDS / external MySQL servers
- DynamoDB
AWS Data Pipeline lets you copy and move data from MySQL and DynamoDB tables to S3 and back.
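To get a feel for the API behind the service, here is a minimal sketch using boto3's `datapipeline` client, assuming boto3 is installed and AWS credentials are configured; the pipeline name, unique ID, and region are made-up placeholders:

```python
import boto3

# Hypothetical region, chosen only for illustration.
client = boto3.client("datapipeline", region_name="us-east-1")

# Create an empty pipeline shell; uniqueId acts as an idempotency token,
# so re-running the call will not create a duplicate pipeline.
resp = client.create_pipeline(
    name="demo-pipeline",          # placeholder name
    uniqueId="demo-pipeline-001",  # placeholder idempotency token
    description="Move data between S3, RDS MySQL and DynamoDB",
)
pipeline_id = resp["pipelineId"]
print("Created pipeline:", pipeline_id)
```

The pipeline does nothing until a definition is uploaded and the pipeline is activated; a sketch of that step follows the template list below.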


By default, Data Pipeline provides several templates:
- Export from DynamoDB to S3
- Export from S3 to DynamoDB
- Copy from S3 to RDS
- Copy from RDS to S3
- File Analysis in S3
- Migrating from non-RDS MySQL to S3
You can also come up with all sorts of other uses for Data Pipeline beyond these templates; a sketch of what the "Copy from RDS to S3" case boils down to is shown below.
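Here is a hedged sketch of how a "Copy from RDS to S3" definition might be uploaded through boto3. A definition is just a flat list of objects, each an id plus key/value fields, wired together by `refValue` references; the table, bucket, connection string, and instance settings below are invented placeholders, not values from the announcement:

```python
import boto3

client = boto3.client("datapipeline", region_name="us-east-1")
pipeline_id = "df-0123456789ABC"  # returned by create_pipeline (see the sketch above)

objects = [
    # Pipeline-wide defaults: logging, IAM roles, scheduling mode.
    {"id": "Default", "name": "Default", "fields": [
        {"key": "scheduleType", "stringValue": "ondemand"},
        {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
        {"key": "pipelineLogUri", "stringValue": "s3://my-bucket/logs/"},  # placeholder
        {"key": "role", "stringValue": "DataPipelineDefaultRole"},
        {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
    ]},
    # Source: a table reachable over JDBC (RDS or otherwise).
    {"id": "Source", "name": "Source", "fields": [
        {"key": "type", "stringValue": "SqlDataNode"},
        {"key": "connectionString", "stringValue": "jdbc:mysql://mydb.example.com:3306/shop"},  # placeholder
        {"key": "username", "stringValue": "pipeline"},
        {"key": "*password", "stringValue": "secret"},  # the * marks the field as sensitive
        {"key": "table", "stringValue": "orders"},
        {"key": "selectQuery", "stringValue": "SELECT * FROM orders"},
    ]},
    # Destination: a file in S3.
    {"id": "Dest", "name": "Dest", "fields": [
        {"key": "type", "stringValue": "S3DataNode"},
        {"key": "filePath", "stringValue": "s3://my-bucket/export/orders.csv"},  # placeholder
    ]},
    # The activity that ties source and destination together.
    {"id": "CopyOrders", "name": "CopyOrders", "fields": [
        {"key": "type", "stringValue": "CopyActivity"},
        {"key": "input", "refValue": "Source"},
        {"key": "output", "refValue": "Dest"},
        {"key": "runsOn", "refValue": "Worker"},
    ]},
    # The machine that actually performs the copy.
    {"id": "Worker", "name": "Worker", "fields": [
        {"key": "type", "stringValue": "Ec2Resource"},
        {"key": "instanceType", "stringValue": "t1.micro"},
        {"key": "terminateAfter", "stringValue": "1 Hour"},
    ]},
]

client.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
client.activate_pipeline(pipelineId=pipeline_id)
```

The same structure underlies every template: data nodes describe the endpoints, an activity describes the copy, and a resource describes the machine that runs it.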

Everything is configured through a graphical interface: you drag elements onto the canvas, set their parameters, and so on. The official AWS blog post walks through an example Data Pipeline setup.
Why would you need it? The first thing that comes to mind is backups: with Data Pipeline you can easily set up scheduled backups, even from external database servers to S3.
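For the backup scenario, one plausible building block is a ShellCommandActivity on a schedule that runs mysqldump against the external server and stages the dump into S3. The host, credentials, and bucket below are invented placeholders, and exact field values (e.g. the period syntax) should be checked against the service documentation:

```python
# Objects for a hypothetical nightly mysqldump-to-S3 pipeline; uploaded
# with put_pipeline_definition exactly as in the previous sketch.
backup_objects = [
    {"id": "Nightly", "name": "Nightly", "fields": [
        {"key": "type", "stringValue": "Schedule"},
        {"key": "period", "stringValue": "1 days"},                      # assumed period syntax
        {"key": "startDateTime", "stringValue": "2012-12-01T00:00:00"},
    ]},
    {"id": "Dump", "name": "Dump", "fields": [
        {"key": "type", "stringValue": "ShellCommandActivity"},
        {"key": "schedule", "refValue": "Nightly"},
        {"key": "runsOn", "refValue": "Worker"},                         # an Ec2Resource, as above
        {"key": "stage", "stringValue": "true"},
        # With staging enabled, files written to ${OUTPUT1_STAGING_DIR}
        # are copied to the S3DataNode referenced by "output".
        {"key": "command", "stringValue":
            "mysqldump -h db.example.com -u backup -pSECRET shop "
            "> ${OUTPUT1_STAGING_DIR}/shop.sql"},
        {"key": "output", "refValue": "BackupBucket"},
    ]},
    {"id": "BackupBucket", "name": "BackupBucket", "fields": [
        {"key": "type", "stringValue": "S3DataNode"},
        {"key": "directoryPath", "stringValue": "s3://my-backups/mysql/"},  # placeholder
    ]},
]
```

Nothing here is specific to RDS, which is what makes the external-server case possible: the shell command runs on an EC2 instance that can reach any host you point it at.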
Let's share ideas on how these mechanisms could be used. Have questions about the possibilities? Let's look for answers together!