flowerpower pipeline Commands

This section details the commands available under flowerpower pipeline.

run

Run a pipeline immediately.

This command executes a pipeline with the specified configuration and inputs. The pipeline will run synchronously, and the command will wait for completion.

Usage

flowerpower pipeline run NAME [options]

Arguments

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name of the pipeline to run | Required |
| executor | str | Type of executor to use | Required |
| base_dir | str | Base directory containing pipelines and configurations | Required |
| inputs | str | Input parameters for the pipeline | Required |
| final_vars | str | Final variables to request from the pipeline | Required |
| config | str | Configuration for the Hamilton executor | Required |
| cache | str | Cache configuration for improved performance | Required |
| storage_options | str | Options for storage backends | Required |
| log_level | str | Set the logging level | Required |
| with_adapter | str | Configuration for adapters like trackers or monitors | Required |
| max_retries | str | Maximum number of retry attempts on failure | Required |
| retry_delay | str | Base delay between retries in seconds | Required |
| jitter_factor | str | Random factor applied to delay for jitter (0-1) | Required |

Examples

$ pipeline run my_pipeline

# Run with custom inputs
$ pipeline run my_pipeline --inputs '{"data_path": "data/myfile.csv", "limit": 100}'

# Specify which final variables to calculate
$ pipeline run my_pipeline --final-vars '["output_table", "summary_metrics"]'
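
The names used with --inputs and --final-vars refer to functions in the pipeline's module. Following the Hamilton convention that flowerpower builds on, each top-level function in the module is a DAG node, and its parameter names declare what it depends on. Below is a minimal, purely illustrative sketch of a module the two examples above could target; the function names and bodies are assumptions, not this project's actual code.

```python
# pipelines/my_pipeline.py -- hypothetical module matching the examples above.
import pandas as pd

def raw_data(data_path: str, limit: int) -> pd.DataFrame:
    # data_path and limit are supplied at run time via --inputs.
    return pd.read_csv(data_path).head(limit)

def output_table(raw_data: pd.DataFrame) -> pd.DataFrame:
    # The parameter name wires this node to raw_data above.
    return raw_data.dropna()

def summary_metrics(output_table: pd.DataFrame) -> dict:
    # Requested by name via --final-vars, together with output_table.
    return {"rows": len(output_table), "columns": len(output_table.columns)}
```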

# Configure caching
$ pipeline run my_pipeline --cache '{"type": "memory", "ttl": 3600}'

# Use a different executor
$ pipeline run my_pipeline --executor distributed

# Enable adapters for monitoring/tracking
$ pipeline run my_pipeline --with-adapter '{"tracker": true, "opentelemetry": true}'

# Set a specific logging level
$ pipeline run my_pipeline --log-level debug

# Configure automatic retries on failure
$ pipeline run my_pipeline --max-retries 3 --retry-delay 2.0 --jitter-factor 0.2
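
For intuition on how --retry-delay and --jitter-factor interact, here is a sketch of one common way a jittered retry delay is computed. The exponential backoff and the exact formula are assumptions for illustration, not flowerpower's documented behavior.

```python
import random

def retry_wait(attempt: int, retry_delay: float = 2.0, jitter_factor: float = 0.2) -> float:
    """Hypothetical schedule: exponential backoff on the base delay, with
    +/- jitter_factor of random variation so concurrent retries spread out."""
    base = retry_delay * (2 ** (attempt - 1))  # attempt 1 -> 2s, 2 -> 4s, 3 -> 8s
    return base * (1 + random.uniform(-jitter_factor, jitter_factor))
```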

new

Create a new pipeline structure.

This command creates a new pipeline with the necessary directory structure, configuration file, and skeleton module file. It prepares all the required components for you to start implementing your pipeline logic.

Usage

flowerpower pipeline new NAME [options]

Arguments

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name for the new pipeline | Required |
| base_dir | str | Base directory to create the pipeline in | Required |
| storage_options | str | Options for storage backends | Required |
| log_level | str | Set the logging level | Required |
| overwrite | str | Whether to overwrite an existing pipeline with the same name | Required |

Examples

$ pipeline new my_new_pipeline

# Create a pipeline, overwriting if it exists
$ pipeline new my_new_pipeline --overwrite

# Create a pipeline in a specific directory
$ pipeline new my_new_pipeline --base-dir /path/to/project
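
The result is a pipeline configuration file plus a skeleton module for you to fill in. The layout below is a hypothetical sketch; the actual paths flowerpower creates may differ.

```
project/
├── conf/
│   └── pipelines/
│       └── my_new_pipeline.yml   # pipeline configuration
└── pipelines/
    └── my_new_pipeline.py        # skeleton module to implement
```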

delete

Delete a pipeline's configuration and/or module files.

This command removes a pipeline's configuration file and/or module file from the project. If neither --cfg nor --module is specified, both will be deleted.

Usage

flowerpower pipeline delete NAME [options]

Arguments

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name of the pipeline to delete | Required |
| base_dir | str | Base directory containing the pipeline | Required |
| cfg | str | Delete only the configuration file | Required |
| module | str | Delete only the pipeline module | Required |
| storage_options | str | Options for storage backends | Required |
| log_level | str | Set the logging level | Required |

Examples

$ pipeline delete my_pipeline

# Delete only the configuration file
$ pipeline delete my_pipeline --cfg

# Delete only the module file
$ pipeline delete my_pipeline --module

show-dag

Show the DAG (Directed Acyclic Graph) of a pipeline.

This command generates and displays a visual representation of the pipeline's execution graph, showing how nodes are connected and the dependencies between them.

Usage

flowerpower pipeline show-dag NAME [options]

Arguments

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name of the pipeline to visualize | Required |
| base_dir | str | Base directory containing the pipeline | Required |
| storage_options | str | Options for storage backends | Required |
| log_level | str | Set the logging level | Required |
| format | str | Output format for the visualization | Required |

Examples

$ pipeline show-dag my_pipeline

# Generate SVG format visualization
$ pipeline show-dag my_pipeline --format svg

# Get raw graphviz object
$ pipeline show-dag my_pipeline --format raw
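
The raw format emits Graphviz source, which you can render yourself with the graphviz package. A small sketch, assuming you saved the output to my_pipeline.dot (an assumed filename) and have Graphviz installed:

```python
import graphviz

# Render Graphviz source captured from --format raw.
with open("my_pipeline.dot") as f:
    src = graphviz.Source(f.read())
src.render("my_pipeline", format="png", cleanup=True)  # writes my_pipeline.png
```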

save-dag

Save the DAG (Directed Acyclic Graph) of a pipeline to a file.

This command generates a visual representation of the pipeline's execution graph and saves it to a file in the specified format.

Usage

flowerpower pipeline save-dag NAME [options]

Arguments

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name of the pipeline to visualize | Required |
| base_dir | str | Base directory containing the pipeline | Required |
| storage_options | str | Options for storage backends | Required |
| log_level | str | Set the logging level | Required |
| format | str | Output format for the visualization | Required |
| output_path | str | Custom file path to save the output (defaults to pipeline name) | Required |

Examples

$ pipeline save-dag my_pipeline

# Save in SVG format
$ pipeline save-dag my_pipeline --format svg

# Save to a custom location
$ pipeline save-dag my_pipeline --output-path ./visualizations/my_graph.png

show-pipelines

List all available pipelines in the project.

This command displays a list of all pipelines defined in the project, providing an overview of what pipelines are available to run or schedule.

Usage

flowerpower pipeline show-pipelines [options]

Arguments

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| base_dir | str | Base directory containing pipelines | Required |
| storage_options | str | Options for storage backends | Required |
| log_level | str | Set the logging level | Required |
| format | str | Output format for the list (table, json, yaml) | Required |

Examples

$ pipeline show-pipelines

# Output in JSON format
$ pipeline show-pipelines --format json

# List pipelines from a specific directory
$ pipeline show-pipelines --base-dir /path/to/project

show-summary

Show summary information for one or all pipelines.

This command displays detailed information about pipelines including their configuration, code structure, and project context. You can view information for a specific pipeline or get an overview of all pipelines.

Usage

flowerpower pipeline show-summary [options]

Arguments

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name of a specific pipeline to summarize (all if not specified) | Required |
| cfg | str | Include configuration details | Required |
| code | str | Include code/module details | Required |
| project | str | Include project context information | Required |
| base_dir | str | Base directory containing pipelines | Required |
| storage_options | str | Options for storage backends | Required |
| log_level | str | Set the logging level | Required |
| to_html | str | Generate HTML output instead of text | Required |
| to_svg | str | Generate SVG output (where applicable) | Required |
| output_file | str | File path to save the output instead of printing to console | Required |

Examples

$ pipeline show-summary

# Show summary for a specific pipeline
$ pipeline show-summary --name my_pipeline

# Show only configuration information
$ pipeline show-summary --name my_pipeline --cfg --no-code --no-project

# Generate HTML report
$ pipeline show-summary --to-html --output-file pipeline_report.html

add-hook

Add a hook to a pipeline configuration.

This command adds a hook function to a pipeline's configuration. Hooks are functions that are called at specific points during pipeline execution to perform additional tasks like logging, monitoring, or data validation.

Usage

flowerpower pipeline add-hook NAME [options]

Arguments

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name of the pipeline to add the hook to | Required |
| function_name | str | Name of the hook function (must be defined in the pipeline module) | Required |
| type | str | Type of hook (determines when the hook is called during execution) | Required |
| to | str | Target node or tag (required for node-specific hooks) | Required |
| base_dir | str | Base directory containing the pipeline | Required |
| storage_options | str | Options for storage backends | Required |
| log_level | str | Set the logging level | Required |

Examples

$ pipeline add-hook my_pipeline --function log_results

# Add a pre-run hook
$ pipeline add-hook my_pipeline --function validate_inputs --type PRE_RUN

# Add a node-specific hook (executed before a specific node runs)
$ pipeline add-hook my_pipeline --function validate_data --type NODE_PRE_EXECUTE --to data_processor

# Add a hook for all nodes with a specific tag
$ pipeline add-hook my_pipeline --function log_metrics --type NODE_POST_EXECUTE --to @metrics
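
The functions passed to --function must already exist in the pipeline module. This page does not specify the signature flowerpower expects, so the sketch below is hypothetical: the parameter names and hook bodies are assumptions for illustration only.

```python
# In the pipeline module -- hypothetical hook functions for the examples above.
import logging

logger = logging.getLogger(__name__)

def validate_inputs(**kwargs) -> None:
    # A PRE_RUN-style hook: inspect run inputs before execution starts.
    logger.info("validating inputs: %s", sorted(kwargs))

def log_metrics(node_name: str, result: object, **kwargs) -> None:
    # A NODE_POST_EXECUTE-style hook: log each tagged node's result.
    logger.info("node %s produced %r", node_name, result)
```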