I have created two pipelines in CDF: one is 'test_full' and the other is 'test_inc', and both are cron scheduled. At the designated time, 'test_full' succeeds but 'test_inc' fails. However, if I run 'test_inc' manually, it succeeds.
Errors from the log section:
Based on the error messages, it appears that the 'test_inc' pipeline is failing during execution of the workflow, specifically during the destroy lifecycle method in the SmartWorkflow class. This can happen for a variety of reasons, such as issues with resource allocation, dependencies, or the pipeline configuration itself.
Here are a few troubleshooting steps you can take:
Check the Pipeline Configuration: Make sure that the pipeline configuration for 'test_inc' is correct and consistent with 'test_full'. Since 'test_full' runs fine on the same kind of schedule, a configuration difference between the two pipelines is a likely culprit (the first sketch after this list shows one way to export both configurations for a side-by-side diff).
Check Dependencies: Ensure that all dependencies required by the 'test_inc' pipeline are available at the time of execution. If an external dependency, such as a source system, network path, or credential, is unavailable during the cron-triggered run, the pipeline can fail even though a later manual run succeeds.
Check Resource Allocation: The 'test_inc' pipeline might require more resources than are currently allocated. If the pipeline runs fine manually but fails on the schedule, resource contention is a common cause; for example, if 'test_full' and 'test_inc' are scheduled at the same time, their runs can compete for compute capacity or quota in a way a lone manual run never does. Check resource allocation and usage during the scheduled run times.
Check Logs for More Details: The error messages you've provided are quite generic. Check the detailed logs for the 'test_inc' pipeline; they often contain a more specific error or stack trace that points to the root cause (the second sketch after this list shows one way to pull the logs of a specific run programmatically).
Check the Destroy Method: The error message indicates that the exception is raised during the destroy lifecycle method of the SmartWorkflow class, which runs after pipeline execution. If any of your custom plugins perform cleanup at this stage, make sure that code works correctly; the issue is most likely related to cleanup activities rather than to data processing itself.
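For the configuration comparison, one low-effort option is to export both deployed pipelines through the REST API that Cloud Data Fusion exposes and diff the two outputs. The sketch below is only an illustration under assumptions: Java 11+, the apiEndpoint of your instance (for example, from 'gcloud beta data-fusion instances describe'), an OAuth access token from 'gcloud auth print-access-token', and the standard CDAP app endpoint; verify all of these against your own setup before relying on them.

```java
// Rough sketch only. Assumes Java 11+, the apiEndpoint of your Data Fusion
// instance, and an OAuth access token from 'gcloud auth print-access-token'.
// Endpoint paths follow the public CDAP REST API; verify them for your version.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ComparePipelineConfigs {
  public static void main(String[] args) throws Exception {
    String apiEndpoint = args[0];  // e.g. https://<instance>-<project>-dot-<region>.datafusion.googleusercontent.com/api
    String token = args[1];        // OAuth access token

    HttpClient client = HttpClient.newHttpClient();
    for (String pipeline : new String[] {"test_full", "test_inc"}) {
      // GET /v3/namespaces/{namespace}/apps/{app} returns the deployed app,
      // including its pipeline configuration, as JSON.
      HttpRequest request = HttpRequest.newBuilder(
              URI.create(apiEndpoint + "/v3/namespaces/default/apps/" + pipeline))
          .header("Authorization", "Bearer " + token)
          .GET()
          .build();
      String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
      System.out.println("==== " + pipeline + " ====");
      System.out.println(body);  // save both outputs and diff them side by side
    }
  }
}
```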
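For the log check, the same API also serves run histories and per-run logs. Again just a sketch under the same assumptions; 'DataPipelineWorkflow' is the workflow name CDF uses for batch pipelines, but confirm it for your pipeline before copying anything.

```java
// Rough sketch only, same assumptions as the configuration sketch above.
// Lists recent runs of the 'test_inc' workflow, then fetches the full log of
// one run id copied from that output.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchPipelineRunLogs {
  private static String get(HttpClient client, String url, String token) throws Exception {
    HttpRequest request = HttpRequest.newBuilder(URI.create(url))
        .header("Authorization", "Bearer " + token)
        .GET()
        .build();
    return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
  }

  public static void main(String[] args) throws Exception {
    String apiEndpoint = args[0];  // apiEndpoint of the Data Fusion instance
    String token = args[1];        // OAuth access token
    String base = apiEndpoint + "/v3/namespaces/default/apps/test_inc/workflows/DataPipelineWorkflow";

    HttpClient client = HttpClient.newHttpClient();

    // 1. Recent runs (status, start time, run id) of the pipeline's workflow.
    System.out.println(get(client, base + "/runs?limit=10", token));

    // 2. Full log of one failed run, using a run id taken from the output above.
    if (args.length > 2) {
      String runId = args[2];
      System.out.println(get(client, base + "/runs/" + runId + "/logs", token));
    }
  }
}
```

Comparing the log of a failed scheduled run with the log of a successful manual run is usually the quickest way to see what actually differs between the two.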
Hi,
I am also facing the same issue. When I run the pipeline in preview mode it works without any error, but after deployment the run fails with the error message below in the advanced logs.
Error Message -- Exception raised on destroy lifecycle method in class io.cdap.cdap.datapipeline.SmartWorkflow of the Workflow program of run program_run:default.TestINCR.99.22VS-SNAPSHOT.workflow.DataPipelineWorkflow
Best Regards,
Sameer
The error message you're encountering points to the 'destroy' lifecycle method that runs at the end of the 'DataPipelineWorkflow' program (implemented by the SmartWorkflow class). This method is in charge of cleaning up resources used by the pipeline, such as temporary files and database connections. If it encounters an issue, the whole run is reported as failed.
Several potential causes could make the 'destroy' method fail: a bug in custom plugin code that runs during cleanup, a platform-side issue, or a problem with the resources the pipeline is attempting to clean up. For instance, if temporary files or database connections are no longer accessible, the 'destroy' method will fail.
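If the stack trace ultimately points into cleanup code in one of your own plugins rather than into the platform, one mitigation is to make that cleanup defensive. The sketch below is purely illustrative and rests on assumptions: it shows a hypothetical custom transform built with the CDAP plugin API; the class name, the JDBC connection, and everything else in it are invented for the example, and the API signatures should be checked against the CDAP version your instance runs.

```java
package com.example.cdf;

import io.cdap.cdap.api.annotation.Description;
import io.cdap.cdap.api.annotation.Name;
import io.cdap.cdap.api.annotation.Plugin;
import io.cdap.cdap.api.data.format.StructuredRecord;
import io.cdap.cdap.etl.api.Emitter;
import io.cdap.cdap.etl.api.Transform;
import io.cdap.cdap.etl.api.TransformContext;
import java.sql.Connection;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Plugin(type = Transform.PLUGIN_TYPE)
@Name("SafeCleanupTransform")
@Description("Pass-through transform whose destroy() never lets a cleanup failure fail the run.")
public class SafeCleanupTransform extends Transform<StructuredRecord, StructuredRecord> {

  private static final Logger LOG = LoggerFactory.getLogger(SafeCleanupTransform.class);

  // Hypothetical external resource opened in initialize() and released in destroy().
  private Connection connection;

  @Override
  public void initialize(TransformContext context) throws Exception {
    super.initialize(context);
    // connection = DriverManager.getConnection(...);  // open whatever the stage needs
  }

  @Override
  public void transform(StructuredRecord input, Emitter<StructuredRecord> emitter) {
    emitter.emit(input);  // pass records through unchanged; cleanup is the point here
  }

  @Override
  public void destroy() {
    // destroy() runs after the pipeline has processed its data. An exception
    // thrown here is enough to mark the whole run as failed, which matches the
    // error reported in this thread.
    try {
      if (connection != null && !connection.isClosed()) {
        connection.close();
      }
    } catch (Exception e) {
      // Log and swallow so a cleanup hiccup does not fail an otherwise good run.
      LOG.warn("Cleanup in destroy() failed; continuing without failing the run", e);
    }
  }
}
```

Whether to swallow cleanup exceptions like this is a design choice; logging them keeps the problem visible without letting it mark an otherwise successful run as failed.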
If you can replicate the error by manually running the pipeline, you can use the Cloud Data Fusion logs to gain more insight into the cause of the failure. These logs will provide the error's stack trace, which can help pinpoint the exact line of code causing the problem.
After identifying the source of the error, you can address the issue and attempt to run the pipeline again. If the error continues, you may need to reach out to Google Cloud support for assistance.
Here are a few additional troubleshooting suggestions. Specifically, you can: