amazon-archives / data-pipeline-samples

This repository hosts sample pipelines
MIT No Attribution
464 stars 269 forks

How to run with standalone #57

Open srikrishnacancun opened 8 years ago

srikrishnacancun commented 8 years ago

If I have my own standalone Spark cluster with HDFS/YARN configured, what changes are required to run this code?

mbeitchman commented 8 years ago

Hi,

Can you tell me which sample you are referring to?

Is your standalone cluster an EMR cluster?

srikrishnacancun commented 8 years ago

Hi

I am referring to HadoopTerasort. Yes, I want to run it against my own standalone Spark cluster or Hadoop cluster. What needs to be modified, if anything, to make it work? We want the output written to S3, as in the example. How big a file can we process?

https://github.com/awslabs/data-pipeline-samples/tree/master/samples/HadoopTerasort

Thanks

Srikrishna


mbeitchman commented 8 years ago

Hi Srikrishna,

You will need to run Task Runner on your cluster. Please see this link for more details.

http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-using-task-runner.html

I think you can process as much as you want. Of course, runtime will depend on your cluster size.

Marc

srikrishnacancun commented 8 years ago

Hi

Thanks for your quick response. Can you explain the high-level steps so that I can understand?

I don't want to use an EMR cluster. I have a standalone Spark 1.6.1 cluster. I want to read the input from and write the output to S3.

Have a nice day.

Thanks

Srikrishna


srikrishnacancun commented 8 years ago

Hi

So you are saying that the EMR cluster can be replaced with a physical-server-based Spark/Hadoop cluster? Is that right?

I am very eager to receive your response.

Thanks

Srikrishna


mbeitchman commented 8 years ago

Yes, that is correct. Task Runner is an agent that runs on AWS or on-premises resources to execute the activities in the pipeline. The above documentation explains this in more detail. Please follow up if you have questions once you get started.
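For concreteness, here is a minimal sketch of what that looks like in a pipeline definition. The IDs, worker group name, and spark-submit command are illustrative assumptions, not part of the sample: an activity that would normally carry a `runsOn` reference to an EmrCluster instead carries a `workerGroup` field, and the Task Runner you install on-premises polls for that group.

```json
{
  "objects": [
    {
      "id": "TeraSortStep",
      "type": "ShellCommandActivity",
      "command": "spark-submit --master spark://my-master:7077 terasort.jar s3://my-bucket/input s3://my-bucket/output",
      "workerGroup": "wg-12345",
      "schedule": { "ref": "DefaultSchedule" }
    }
  ]
}
```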

srikrishnacancun commented 8 years ago

Hi

Thanks. How do I modify the script/code to point the pipeline at my standalone Spark cluster installation and create my custom pipeline? Can you show a code snippet?

Srikrishna


mbeitchman commented 8 years ago

To connect a Task Runner that you've installed to the pipeline activities it should process, add a workerGroup field to the object, and configure Task Runner to poll for that worker group value. You do this by passing the worker group string as a parameter (for example, --workerGroup=wg-12345) when you run the Task Runner JAR file.

http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-task-runner-user-managed.html
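Putting that together, the Task Runner invocation on a node of the standalone cluster would look roughly like this. The jar version, credentials path, region, and log bucket below are placeholders; check the release you download and your own AWS account details.

```shell
# Run Task Runner on the on-premises cluster, polling for
# activities whose workerGroup matches the value passed here.
java -jar TaskRunner-1.0.jar \
    --config ~/credentials.json \
    --workerGroup=wg-12345 \
    --region=us-east-1 \
    --logUri=s3://my-bucket/task-runner-logs
```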