I am running an ML pipeline on Vertex AI and it is not behaving as expected.
I have the pipeline below defined. When I execute it, I pass a timestamp (e.g. `20230714100000`) via `parameter_values`. I want the custom training job to write its model artifacts into a directory named after that timestamp, and the importer to import the model from the same directory. However, when I run it, GCS creates a directory literally named `bucket_name/{{channel:task=;name=timestamp;type=String;}}`. How can this be avoided?
```
import google.cloud.aiplatform as aip
import kfp
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp
from kfp.components import importer_node


@kfp.dsl.pipeline(name="xxx")
def pipeline(
    timestamp: str = "notimestamp",
):
    custom_job_task = CustomTrainingJobOp(
        project="xxx",
        display_name=f"xxx-{timestamp}",
        worker_pool_specs=[
            {
                "containerSpec": {
                    "env": [
                        {"name": "BUCKET_NAME", "value": f"bucket_name/{timestamp}"},
                    ],
                    "imageUri": "xxx",
                },
                "replicaCount": "1",
                "machineSpec": {
                    "machineType": "xxx",
                    "acceleratorType": "xxx",
                    "acceleratorCount": 1,
                },
            }
        ],
    )
    importer_node.importer(
        artifact_uri=f"bucket_name/{timestamp}",
        artifact_class=artifact_types.UnmanagedContainerModel,
        metadata={
            "containerSpec": {
                "imageUri": "xxx",
            },
        },
    ).after(custom_job_task)
```
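For reference, this is roughly how I compile and submit the pipeline; the display name, project/region defaults, and the `pipeline.json` path are placeholders:

```python
import google.cloud.aiplatform as aip
from kfp import compiler

# Compile the pipeline function above into a pipeline spec file.
compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")

# Submit the run, passing the timestamp through parameter_values.
job = aip.PipelineJob(
    display_name="xxx",
    template_path="pipeline.json",
    parameter_values={"timestamp": "20230714100000"},
)
job.submit()
```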