podman run -ti --name gcloud-config docker.io/google/cloud-sdk \
  gcloud auth application-default login

podman run -d --rm \
  --env https_proxy=<some_proxy> \
  --volumes-from gcloud-config \
  -v <some_dir>:<some_dir> \
  gcr.io/cloud-ingest/tsop-agent:latest \
  --project-id=<some_project_id> \
  --hostname=$(hostname) \
  --agent-pool=source_agent_pool
The agents do start, but they aren't able to connect to the pool. If I look at the output of the agent container (using podman attach <containerID>), I see:
0B/s txSum: 0B taskResps[copy:0 delete:0 list:0] ctrlMsgAge:10m50s (??) |
and agent.INFO logs:
Build target: //cloud/transfer/online/onprem/workers/agent:agent
Build id: <some_id>
I1222 06:47:51.288924 3 log_spam.go:51] Command line arguments:
I1222 06:47:51.288926 3 log_spam.go:53] argv[0]: './agent'
I1222 06:47:51.288928 3 log_spam.go:53] argv[1]: '--project-id=<project_id>'
I1222 06:47:51.288930 3 log_spam.go:53] argv[2]: '--hostname=<hostname>'
I1222 06:47:51.288931 3 log_spam.go:53] argv[3]: '--agent-pool=source_agent_pool'
I1222 06:47:51.288933 3 log_spam.go:53] argv[4]: '--container-id=49be0b94bced'
I1222 06:47:51.289408 3 prodlayer.go:217] layer successfully set to NO_LAYER with source DEFAULT
I1222 06:47:53.148699 3 handler.go:45] TaskletHandler initialized to delete at most 1000 objects in parallel:
I1222 06:47:53.148725 3 handler.go:48] TaskletHandler initialized with delete-files: 1024
I1222 06:47:53.148827 3 copy.go:145] TaskletHandler initialized with copy-files: &{0xc00073d2c0 10800000000000}
I1222 06:47:53.148860 3 handler.go:61] TaskletHandler initialized to process at most 256 list outputs in parallel:
I1222 06:48:51.291680 3 cpuutilization.go:86] Last minute's CPU utilization: 0
I1222 06:49:51.291017 3 cpuutilization.go:86] Last minute's CPU utilization: 0
I1222 06:50:51.290721 3 cpuutilization.go:86] Last minute's CPU utilization: 0
I1222 06:51:51.291057 3 cpuutilization.go:86] Last minute's CPU utilization: 0
I1222 06:52:51.290677 3 cpuutilization.go:86] Last minute's CPU utilization: 0
I1222 06:53:51.290445 3 cpuutilization.go:86] Last minute's CPU utilization: 0
I also went through all the troubleshooting steps here, but couldn't find anything. Could it be something to do with using podman instead of docker?
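One additional check worth trying (my suggestion, not from the troubleshooting guide): confirm that the agent container can actually read the Application Default Credentials stored in the gcloud-config volume, since a silent credentials failure would leave the agent idle exactly like this.

```shell
# If --volumes-from is wired up correctly under podman, this should
# print an access token; an error here means the agent is starting
# without usable credentials.
podman run --rm --volumes-from gcloud-config docker.io/google/cloud-sdk \
    gcloud auth application-default print-access-token
```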
Hi @tarun360,
Welcome to Google Cloud Community!
Thank you
Hello,
Thanks for your reply. I did try running the same commands above with podman, but I am facing the exact same issue. The agent.INFO logs are the same as those posted in the question. This was the only relevant log I found in GCP Logging:
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "authenticationInfo": {
      "principalEmail": "{my-email-id}",
      "principalSubject": "user:{my-email-id}"
    },
    "requestMetadata": {
      "callerIp": "gce-internal-ip",
      "callerSuppliedUserAgent": "grpc-go/1.52.0-dev,gzip(gfe)",
      "requestAttributes": {
        "time": "2023-01-06T05:49:44.260233615Z",
        "auth": {}
      },
      "destinationAttributes": {}
    },
    "serviceName": "pubsub.googleapis.com",
    "methodName": "google.pubsub.v1.Subscriber.CreateSubscription",
    "authorizationInfo": [
      {
        "resource": "projects/{my-project-id}/topics/destination_agent_pool-cloud-ingest-control",
        "permission": "pubsub.topics.attachSubscription",
        "granted": true,
        "resourceAttributes": {}
      }
    ],
    "resourceName": "projects/{my-project-id}/topics/destination_agent_pool-cloud-ingest-control",
    "request": {
      "ackDeadlineSeconds": 10,
      "@type": "type.googleapis.com/google.pubsub.v1.Subscription",
      "name": "projects/{my-project-id}/subscriptions/destination_agent_pool-cloud-ingest-control-12910427026409884352",
      "topic": "projects/{my-project-id}/topics/destination_agent_pool-cloud-ingest-control"
    },
    "response": {
      "name": "projects/{my-project-id}/subscriptions/destination_agent_pool-cloud-ingest-control-12910427026409884352",
      "messageRetentionDuration": "604800s",
      "topic": "projects/{my-project-id}/topics/destination_agent_pool-cloud-ingest-control",
      "@type": "type.googleapis.com/google.pubsub.v1.Subscription",
      "pushConfig": {},
      "ackDeadlineSeconds": 10
    }
  },
  "insertId": "uiyrc6b1y",
  "resource": {
    "type": "pubsub_topic",
    "labels": {
      "project_id": "{my-project-id}",
      "topic_id": "projects/{my-project-id}/topics/destination_agent_pool-cloud-ingest-control"
    }
  },
  "timestamp": "2023-01-06T05:49:44.253549825Z",
  "severity": "NOTICE",
  "logName": "projects/{my-project-id}/logs/cloudaudit.googleapis.com%2Factivity",
  "receiveTimestamp": "2023-01-06T05:49:47.091556587Z"
}
I also tried the same using the gcloud CLI:
gcloud transfer agents install --pool=destination_agent_pool --count=1 --mount-directories=/tmp/destination
But I am getting the exact same issue as above. The agent.INFO logs are the same, and the entries in GCP Logging are also the same.
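One way to see whether the agents ever registered on the GCP side (a diagnostic suggestion; the pool name below is the one used in this thread) is to describe the agent pool:

```shell
# A pool that has accepted at least one agent should report
# state: ACTIVE; a pool stuck without agents will show no
# connected agents in the Cloud Console agents view.
gcloud transfer agent-pools describe destination_agent_pool
```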
Did you resolve this issue?
No! Not able to resolve it!
I have the same issue
I found a workaround. After installing the transfer agents at both the source and destination locations, start a transfer job. This prompts the agents to connect to the agent pool; the agents do get connected, and the transfer job succeeds.
This seems like a bug with transfer agents.
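For anyone else hitting this, the workaround amounts to creating a POSIX-to-POSIX job between the two pools. A sketch (the paths and pool names are illustrative, matching the ones used earlier in this thread):

```shell
# Creating a job between the two agent pools; in this case starting
# a job is what nudged the idle agents into finally connecting.
gcloud transfer jobs create \
    posix:///tmp/source posix:///tmp/destination \
    --source-agent-pool=source_agent_pool \
    --destination-agent-pool=destination_agent_pool
```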
Hi tarun360,
it's good to hear that a workaround has been found. However, if you think this is a bug, please create a bug ticket in the Issue Tracker. If it is a bug, someone else will most probably be affected by it again.
https://issuetracker.google.com/
thanks,
DamianS