Scylla Manager backup dry run fails with: giving up after 2 attempts: after 30s: context deadline exceeded

We are setting up a ScyllaDB cluster to be backed up to AWS S3 compatible storage and have set access_key, secret, and endpoint in scylla-manager-agent.yaml correctly. From each node the following command also works fine:

scylla-manager-agent check-location --debug --location s3:

But when we try to test the backup from the Scylla Manager node, it fails with:

Error: get backup target: location is not accessible

sctool backup -c 'Scylla Prod 01' -L 's3:' --dry-run

NOTICE: this may take a while, we are performing disk size calculations on the nodes

2a0b:b580:0:0:xx:xxxx:xx:xx: giving up after 2 attempts: after 30s: context deadline exceeded
2a0b:b580:0:0:xx:xxxx:xx:xx: giving up after 2 attempts: after 30s: context deadline exceeded

May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.365Z","N":"backup","M":"Generating backup target","cluster_id":"fe769e49-2639-4426-8829-77d5ccc20ae2","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.594Z","N":"cluster.client","M":"Checking hosts connectivity","hosts":["2a0b:b580:0:0:xx:xxxx:xx:xx","2a0b:b580:0:0:xx:xxxx:xx:xx","2a0b:b580:0:0:xx:xxxx:xx:xx","2a0b:b580:0:0:xx:xxxx:xx:xx","2a0b:b580:0:0:xx:xxxx:xx:xx"],"_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.594Z","N":"cluster.client","M":"Host check OK","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.594Z","N":"cluster.client","M":"Host check OK","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.594Z","N":"cluster.client","M":"Host check OK","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.594Z","N":"cluster.client","M":"Host check OK","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.594Z","N":"cluster.client","M":"Host check OK","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.594Z","N":"cluster.client","M":"Done checking hosts connectivity","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:47 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:47.594Z","N":"backup","M":"Checking accessibility of remote locations","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:08:59 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:59.832Z","N":"cluster","M":"Creating new Scylla REST client","cluster_id":"fe769e49-2639-4426-8829-77d5ccc20ae2"}
May 08 13:08:59 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:08:59.884Z","N":"cluster.client","M":"Measuring datacenter latencies","dcs":["SCY-PRD1"]}
May 08 13:09:17 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:17.596Z","N":"cluster.client","M":"HTTP retry backoff","operation":"OperationsCheckPermissions","wait":"1s","error":"after 30s: context deadline exceeded","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:09:29 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:29.832Z","N":"cluster","M":"Creating new Scylla REST client","cluster_id":"fe769e49-2639-4426-8829-77d5ccc20ae2"}
May 08 13:09:29 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:29.890Z","N":"cluster.client","M":"Measuring datacenter latencies","dcs":["SCY-PRD1"]}
May 08 13:09:48 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:48.597Z","N":"backup","M":"Location check FAILED","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","location":"s3:<Bucket Name>","error":"giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:09:48 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:48.597Z","N":"backup","M":"Location check FAILED","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","location":"s3:<Bucket Name>","error":"giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:09:48 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:48.597Z","N":"backup","M":"Location check FAILED","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","location":"s3:<Bucket Name>","error":"giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:09:48 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:48.597Z","N":"backup","M":"Location check FAILED","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","location":"s3:<Bucket Name>","error":"giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:09:48 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:48.597Z","N":"backup","M":"Location check FAILED","host":"2a0b:b580:0:0:xx:xxxx:xx:xx","location":"s3:<Bucket Name>","error":"giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:09:48 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:48.597Z","N":"backup","M":"Done checking accessibility of remote locations","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:09:48 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:48.597Z","N":"http","M":"GET /api/v1/cluster/Scylla%20Prod%2001/tasks/backup/target","from":"127.0.0.1:59140","status":400,"bytes":566,"duration":"61235ms","error":"get backup target: location is not accessible: 2a0b:b580:0:0:xx:xxxx:xx:xx: giving up after 2 attempts: after 30s: context deadline exceeded; 2a0b:b580:0:0:xx:xxxx:xx:xx: giving up after 2 attempts: after 30s: context deadline exceeded; 2a0b:b580:0:0:xx:xxxx:xx:xx: giving up after 2 attempts: after 30s: context deadline exceeded; 2a0b:b580:0:0:xx:xxxx:xx:xx: giving up after 2 attempts: after 30s: context deadline exceeded; 2a0b:b580:0:0:xx:xxxx:xx:xx: giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"L9hVLdVmQBCl40aj9f0zkQ"}
May 08 13:09:59 prd-scylladbmanager scylla-manager[954628]: {"L":"INFO","T":"2023-05-08T13:09:59.832Z","N":"cluster","M":"Creating new Scylla REST client","cluster_id":"fe769e49-2639-4426-8829-77d5ccc20ae2"}

The scylla-manager-agent on the ScyllaDB node logs these errors:

May 08 13:09:17 prd-scylladb1 scylla-manager-agent[3135]: {"L":"ERROR","T":"2023-05-08T13:09:17.596Z","N":"rclone","M":"Location check: error=no put permission: context canceled","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone.RedirectLogPrint.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/logger.go:19\ngithub.com/rclone/rclone/fs.LogPrintf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:152\ngithub.com/rclone/rclone/fs.Errorf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:167\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.rcCheckPermissions\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rc.go:497\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.ServeHTTP\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:258\nnet/http.StripPrefix.func1\n\tnet/http/server.go:2152\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:70\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/scylladb/scylla-manager/v3/pkg/auth.ValidateToken.func1.1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/auth/auth.go:49\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*ChainHandler).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/chain.go:31\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:57\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2947\nnet/http.(*conn).serve\n\tnet/http/server.go:1991"}
May 08 13:09:17 prd-scylladb1 scylla-manager-agent[3135]: {"L":"ERROR","T":"2023-05-08T13:09:17.596Z","N":"rclone","M":"rc: \"operations/check-permissions\": error: no put permission: context canceled","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone.RedirectLogPrint.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/logger.go:19\ngithub.com/rclone/rclone/fs.LogPrintf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:152\ngithub.com/rclone/rclone/fs.Errorf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:167\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.writeError\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:81\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.ServeHTTP\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:265\nnet/http.StripPrefix.func1\n\tnet/http/server.go:2152\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:70\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/scylladb/scylla-manager/v3/pkg/auth.ValidateToken.func1.1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/auth/auth.go:49\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*ChainHandler).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/chain.go:31\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:57\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2947\nnet/http.(*conn).serve\n\tnet/http/server.go:1991"}
May 08 13:09:17 prd-scylladb1 scylla-manager-agent[3135]: {"L":"INFO","T":"2023-05-08T13:09:17.596Z","N":"http","M":"POST /agent/rclone/operations/check-permissions","from":"[2a0b:b580::10:121:64:10]:59866","status":404,"bytes":150,"duration":"30001ms"}

@roy.susmit Thank you for reporting this issue.

Please find the Scylla Manager Git repository at GitHub - scylladb/scylla-manager: The Scylla Manager.
That's the best place to create issues related to Scylla Manager.

I'm missing a bit of information here, like:

  • what version of scylla-manager are you using
  • what is the output of scylla-manager-agent check-location -L s3:<location>

The error message you attached, error: no put permission: context canceled, is misleading in the context of the issue reported here.

The real root cause is the timeout that comes from the Scylla Manager. When you run sctool backup -c 'Scylla Prod 01' -L 's3:' --dry-run, it is the scylla-manager that calls the scylla-manager-agent REST API to check the permissions.
Scylla Manager's calls to the scylla-manager-agent REST API time out after 30s.
scylla-manager-agent tried to interact with the S3 API to put a file into the bucket, but the call took more than 30s and was effectively cancelled.

scylla-manager-agent check-location --debug --location s3: calls are not subject to that timeout. They are performed directly on the agent nodes, with nothing in between the caller and the agent.
Please let us know how much time it takes to check the location from a node directly.
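For reference, a quick way to measure this is to wrap the check in time on one of the nodes (the bucket name below is a placeholder):

# run directly on a Scylla node; replace <your-bucket> with the real bucket name
time scylla-manager-agent check-location --debug --location s3:<your-bucket>

If the real time reported is close to or above 30s, the manager-side permission check will keep hitting its deadline.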

I suspect that the S3 compatible storage response time is > 30s.

Hello @Karol_Kokoszka,
I was wondering if you can help; I am facing an issue testing the dry run backup to Azure Blob Storage.
The VMs are on prem, so I used the storage account name and key to update the YAML config file.

I have 3 nodes

I get this error when I run:

sctool backup -c mycluster -L 'mybackupname' --dry-run

Error: create backup target: location is not accessible
MYIP: giving up after 2 attempts: agent [HTTP 500] init location: Failed to acquire MSI token: MSI is not enabled on this VM: Get "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com": dial tcp 169.254.169.254:80: connect: no route to host - make sure the location is correct and credentials are set, to debug SSH to MYIP and run "scylla-manager-agent check-location -L MYbackupstorage --debug"

When I run scylla-manager-agent check-location -L MYbackupstorage --debug on all of the nodes it returns fine, and when I run scylla-manager-agent check-location -L azure: on all nodes it also returns fine, without any errors.

The debug run on each node finishes cleanly, ending with the deletion of the test object, as below on all nodes:
{"L":"DEBUG","T":"2023-10-17T13:13:02.875+0100","N":"rclone","M":"Waiting for deletions to finish"}
{"L":"DEBUG","T":"2023-10-17T13:13:03.229+0100","N":"rclone","M":"test: Deleted"}

How can I resolve this and get the dry run backup test to pass?

I also tried an ad hoc backup and got the same error:
Error: create backup target: location is not accessible
MYIP: giving up after 2 attempts: agent [HTTP 500] init location: Failed to acquire MSI token: MSI is not enabled on this VM: Get "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com": dial tcp 169.254.169.254:80: connect: no route to host - make sure the location is correct and credentials are set, to debug SSH to MYIP and run "scylla-manager-agent check-location -L MYbackupstorage --debug"

Please help

@anyone

We are also facing a similar problem while running the dry run command below:

sctool backup -c prod-cluster --retention 7 -i 1h -K 'keyspace.*,!keyspace.data_*' -L 's3:scylla-manager-backup' --rate-limit 1000 --dry-run

From the Scylla Manager we got Error: get backup target: location is not accessible, with the logs below in the Scylla Manager:

Oct 19 16:42:52 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:42:52.285+0530","N":"backup","M":"Generating backup target","cluster_id":"76f3615e-8738-490c-b324-0310b7d0e6a1","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:42:57 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:42:57.376+0530","N":"cluster.client","M":"Checking hosts connectivity","hosts":["172.31.129.20","172.31.129.29","172.31.129.69"],"_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:42:57 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:42:57.379+0530","N":"cluster.client","M":"Host check OK","host":"172.31.129.69","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:42:57 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:42:57.383+0530","N":"cluster.client","M":"Host check OK","host":"172.31.129.20","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:42:58 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:42:58.811+0530","N":"cluster.client","M":"Host check OK","host":"172.31.129.29","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:42:58 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:42:58.811+0530","N":"cluster.client","M":"Done checking hosts connectivity","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:42:58 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:42:58.811+0530","N":"backup","M":"Checking accessibility of remote locations","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:42:59 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:42:59.150+0530","N":"backup","M":"Location check OK","host":"172.31.129.20","location":"s3:scylla-manager-backup","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:43:24 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:43:24.909+0530","N":"backup","M":"Location check OK","host":"172.31.129.29","location":"s3:scylla-manager-backup","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:43:28 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:43:28.813+0530","N":"cluster.client","M":"HTTP retry backoff","operation":"OperationsCheckPermissions","wait":"1s","error":"after 30s: context deadline exceeded","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:43:59 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:43:59.814+0530","N":"backup","M":"Location check FAILED","host":"172.31.129.69","location":"s3:scylla-manager-backup","error":"giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:43:59 ip-172-31-129-6 scylla-manager[1167760]: {"L":"ERROR","T":"2023-10-19T16:43:59.814+0530","N":"backup","M":"Failed to access location from node","node":"172.31.129.69","location":"s3:scylla-manager-backup","error":"172.31.129.69: giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA","errorStack":"github.com/scylladb/scylla-manager/v3/pkg/service/backup.(*Service).checkHostLocation\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/backup/service.go:379\ngithub.com/scylladb/scylla-manager/v3/pkg/service/backup.(*Service).checkLocationsAvailableFromNodes.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/backup/service.go:350\ngithub.com/scylladb/scylla-manager/v3/pkg/util/parallel.Run.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/util/parallel/parallel.go:72\nruntime.goexit\n\truntime/asm_arm64.s:1172\n","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/service/backup.(*Service).checkLocationsAvailableFromNodes.func2\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/backup/service.go:359\ngithub.com/scylladb/scylla-manager/v3/pkg/util/parallel.Run.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/util/parallel/parallel.go:79"}
Oct 19 16:43:59 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:43:59.814+0530","N":"backup","M":"Done checking accessibility of remote locations","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}
Oct 19 16:43:59 ip-172-31-129-6 scylla-manager[1167760]: {"L":"INFO","T":"2023-10-19T16:43:59.814+0530","N":"http","M":"GET /api/v1/cluster/prod-cluster-latest/tasks/backup/target","from":"127.0.0.1:46360","status":400,"bytes":190,"duration":"67535ms","error":"get backup target: location is not accessible: 172.31.129.69: giving up after 2 attempts: after 30s: context deadline exceeded","_trace_id":"taeySVdaTsGxxAZ8Xfb0JA"}

while on that node the scylla-manager-agent logged the errors below:

Oct 19 16:44:38 ip-172-31-129-69 scylla-manager-agent[1074874]: {"L":"ERROR","T":"2023-10-19T16:44:38.725+0530","N":"rclone","M":"Location check: error=no put permission: context canceled","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone.RedirectLogPrint.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/logger.go:19\ngithub.com/rclone/rclone/fs.LogPrintf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:152\ngithub.com/rclone/rclone/fs.Errorf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:167\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.rcCheckPermissions\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rc.go:498\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.ServeHTTP\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:260\nnet/http.StripPrefix.func1\n\tnet/http/server.go:2165\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:70\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/scylladb/scylla-manager/v3/pkg/auth.ValidateToken.func1.1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/auth/auth.go:50\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*ChainHandler).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/chain.go:31\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:57\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2936\nnet/http.(*conn).serve\n\tnet/http/server.go:1995"}
Oct 19 16:44:38 ip-172-31-129-69 scylla-manager-agent[1074874]: {"L":"ERROR","T":"2023-10-19T16:44:38.726+0530","N":"rclone","M":"rc: \"operations/check-permissions\": error: no put permission: context canceled","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone.RedirectLogPrint.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/logger.go:19\ngithub.com/rclone/rclone/fs.LogPrintf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:152\ngithub.com/rclone/rclone/fs.Errorf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:167\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.writeError\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:81\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.ServeHTTP\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:267\nnet/http.StripPrefix.func1\n\tnet/http/server.go:2165\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:70\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/scylladb/scylla-manager/v3/pkg/auth.ValidateToken.func1.1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/auth/auth.go:50\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*ChainHandler).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/chain.go:31\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:57\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2936\nnet/http.(*conn).serve\n\tnet/http/server.go:1995"}
Oct 19 16:44:38 ip-172-31-129-69 scylla-manager-agent[1074874]: {"L":"ERROR","T":"2023-10-19T16:44:38.726+0530","N":"http","M":"POST /agent/rclone/operations/check-permissions","from":"172.31.129.6:60880","status":500,"bytes":150,"duration":"99913ms","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\nmain.(*logEntry).Write\n\tgithub.com/scylladb/scylla-manager/v3/pkg/cmd/agent/log.go:53\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:54\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:58\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2936\nnet/http.(*conn).serve\n\tnet/http/server.go:1995"}
Oct 19 16:44:51 ip-172-31-129-69 scylla-manager-agent[1074874]: {"L":"ERROR","T":"2023-10-19T16:44:51.206+0530","N":"rclone","M":"rc: \"operations/movefile\": error: object not found","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone.RedirectLogPrint.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/logger.go:19\ngithub.com/rclone/rclone/fs.LogPrintf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:152\ngithub.com/rclone/rclone/fs.Errorf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:167\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.writeError\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:81\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.ServeHTTP\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:267\nnet/http.StripPrefix.func1\n\tnet/http/server.go:2165\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:70\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/scylladb/scylla-manager/v3/pkg/auth.ValidateToken.func1.1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/auth/auth.go:50\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*ChainHandler).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/chain.go:31\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:57\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2936\nnet/http.(*conn).serve\n\tnet/http/server.go:1995"}
Oct 19 16:44:51 ip-172-31-129-69 scylla-manager-agent[1074874]: {"L":"ERROR","T":"2023-10-19T16:44:51.206+0530","N":"http","M":"POST /agent/rclone/operations/movefile","from":"172.31.129.6:56044","status":404,"bytes":589,"duration":"102589ms","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\nmain.(*logEntry).Write\n\tgithub.com/scylladb/scylla-manager/v3/pkg/cmd/agent/log.go:53\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:54\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:58\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2936\nnet/http.(*conn).serve\n\tnet/http/server.go:1995"}
Oct 19 16:45:11 ip-172-31-129-69 scylla-manager-agent[1074874]: {"L":"ERROR","T":"2023-10-19T16:45:11.376+0530","N":"rclone","M":"Location check: error=no put permission: context canceled","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone.RedirectLogPrint.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/logger.go:19\ngithub.com/rclone/rclone/fs.LogPrintf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:152\ngithub.com/rclone/rclone/fs.Errorf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:167\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.rcCheckPermissions\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rc.go:498\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.ServeHTTP\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:260\nnet/http.StripPrefix.func1\n\tnet/http/server.go:2165\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:70\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/scylladb/scylla-manager/v3/pkg/auth.ValidateToken.func1.1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/auth/auth.go:50\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*ChainHandler).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/chain.go:31\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:57\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2936\nnet/http.(*conn).serve\n\tnet/http/server.go:1995"}
Oct 19 16:45:11 ip-172-31-129-69 scylla-manager-agent[1074874]: {"L":"ERROR","T":"2023-10-19T16:45:11.376+0530","N":"rclone","M":"rc: \"operations/check-permissions\": error: no put permission: context canceled","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone.RedirectLogPrint.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/logger.go:19\ngithub.com/rclone/rclone/fs.LogPrintf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:152\ngithub.com/rclone/rclone/fs.Errorf\n\tgithub.com/rclone/rclone@v1.51.0/fs/log.go:167\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.writeError\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:81\ngithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver.Server.ServeHTTP\n\tgithub.com/scylladb/scylla-manager/v3/pkg/rclone/rcserver/rcserver.go:267\nnet/http.StripPrefix.func1\n\tnet/http/server.go:2165\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:70\ngithub.com/go-chi/chi/v5.(*Mux).Mount.func1\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:311\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/scylladb/scylla-manager/v3/pkg/auth.ValidateToken.func1.1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/auth/auth.go:50\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*ChainHandler).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/chain.go:31\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:436\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5/middleware.RequestLogger.func1.1\n\tgithub.com/go-chi/chi/v5@v5.0.0/middleware/logger.go:57\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2122\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/v5@v5.0.0/mux.go:87\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2936\nnet/http.(*conn).serve\n\tnet/http/server.go:1995"}

In the cluster of 3 nodes, 2 are working fine; we see the issue only on one node.
All three nodes have the same permissions, security group, and IAM role.

The scylla-manager-agent check-location --debug --location s3:scylla-manager-backup command works fine on all three nodes.

The main challenge is that we are not able to identify why this issue occurs, as the path from scylla-manager to scylla-manager-agent is kind of a black box for me.
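Is there at least a simple way to verify the manager-to-agent path? For example, would something like the check below from the manager host make sense (assuming the default scylla-manager-agent HTTPS port 10001; any HTTP status code printed would at least prove TCP and TLS work)?

# from the Scylla Manager host; 10001 is the default agent HTTPS port
curl -sk -o /dev/null -w '%{http_code}\n' https://172.31.129.69:10001/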

scylla-manager version : 3.2.3
scylla-manager-agent version : 3.2.3

Please help us fix the issue.
Thanks

Hey @sumohx,

You wrote:

When I run scylla-manager-agent check-location -L MYbackupstorage --debug on all of the nodes it returns fine, and when I run scylla-manager-agent check-location -L azure: on all nodes it also returns fine, without any errors.

Did you restart the scylla-manager-agent service after applying changes to the YAML configuration?
check-location started from the scylla-manager-agent CLI reads the configuration when it is executed, but when you do the actual backup, it is the scylla-manager-agent.service that validates the location. If the service is not restarted, it keeps the old configuration in memory.

@Vikram_Pratap_Singh
Please double check that you restarted the scylla-manager-agent.service on all nodes after you set up the backup location.
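For completeness, a minimal restart-and-verify sequence (standard systemd commands; the service name is the one shipped with the packages):

# restart so the running agent re-reads scylla-manager-agent.yaml
sudo systemctl restart scylla-manager-agent
# confirm it is active (running) with a fresh start time
sudo systemctl status scylla-manager-agent

Note that systemctl start is a no-op when the service is already running, so it does not reload the configuration.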

Hello @Karol_Kokoszka. Yes, I restarted with sudo systemctl start scylla-manager-agent on all nodes. The error still persists when I run the dry run:

swuser@scylladb-uatmon:~$ sctool backup -c TestCluster -L 'azure:scylla-db-backup-test' --dry-run
NOTICE: this may take a while, we are performing disk size calculations on the nodes
Error: get backup target: location is not accessible
 172.26.30.22: giving up after 2 attempts: agent [HTTP 500] init location: Failed to acquire MSI token: MSI is not enabled on this VM: Get "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com": dial tcp 169.254.169.254:80: connect: no route to host - make sure the location is correct and credentials are set, to debug SSH to 172.26.30.22 and run "scylla-manager-agent check-location -L azure:scylla-db-backup-test --debug"
Trace ID: OWeEOQzwQVSVUwdvMLWkMA (grep in scylla-manager logs)

@sumohx Did you use the use_msi parameter in your scylla-manager-agent.yaml file?
The error "Failed to acquire MSI token: MSI is not enabled on this VM" may appear when this option is enabled for Azure storage.

Hello @Karol_Kokoszka

Thank you for your response. No, I didn't use use_msi in the YAML file. In fact, there is no such parameter.

There is no such parameter in the guide we provide, but there is the third-party library rclone under the hood that supports it, and the parameter can be provided in scylla-manager-agent.yaml. By using this parameter, the user indicates that they would like to use MSI authentication on Azure.

Anyway, you didn't use it, so it seems my suspicion is wrong.
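If you want to double-check, something like this should show whether the option is set anywhere (assuming the default config path of the packaged install):

# look for the rclone MSI switch in the agent config
grep -n 'use_msi' /etc/scylla-manager-agent/scylla-manager-agent.yaml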

Is there a chance to see the scylla-manager-agent config YAML file?

Here…

# Scylla Manager Agent config YAML

# Specify authentication token, the auth_token needs to be the same for all the
# nodes in a cluster. Use scyllamgr_auth_token_gen to generate the auth_token
# value.
auth_token: XXCXKOUS81eELnaRbLAcAiCz8cSmB5JgVZqF62fQ8******6FQvFYfyOANc42Wxglhw6FVhw4R58HvelOfFY8hPpfTbB7prKiJLILojD1nh9kFbOkR
# Bind REST API to the specified TCP address using HTTPS protocol. By default
# Scylla Manager Agent uses Scylla listen/broadcast address that is read from
# the Scylla API (see scylla section).
#https: 0.0.0.0:10001

# Use custom port instead of default 10001.
#https_port:

# Version of TLS protocol and cipher suites to use with HTTPS.
# The supported versions are: TLSv1.3, TLSv1.2, TLSv1.0.
# The TLSv1.2 is restricted to the following cipher suites:
# ECDHE-ECDSA-WITH-AES-128-GCM-SHA256, ECDHE-RSA-WITH-AES-128-GCM-SHA256,
# ECDHE-ECDSA-WITH-AES-256-GCM-SHA384, ECDHE-RSA-WITH-AES-256-GCM-SHA384,
# ECDHE-ECDSA-WITH-CHACHA20-POLY1305, ECDHE-RSA-WITH-CHACHA20-POLY1305.
# The TLSv1.0 should never be used as it's deprecated.
#tls_mode: TLSv1.2

# TLS certificate and key files to use with HTTPS. The files must contain PEM
# encoded data. If not set a self-signed certificate will be generated,
# the certificate is valid for 1 year after start and uses EC P256 private key.
#tls_cert_file:
#tls_key_file:

# Bind prometheus API to the specified TCP address using HTTP protocol.
# By default it binds to all network interfaces but you can restrict it
# by specifying it like this 127.0.0.1:5090 or any other combination
# of ip and port.
#prometheus: ':5090'

# Debug server that allows to run pprof profiling on demand on a live system.
debug: 127.0.0.1:5114

# CPU to run Scylla Manager Agent on. By default the agent would read Scylla
# configuration at /etc/scylla.d/cpuset.conf and try to find a core not used by
# Scylla. If that's not possible user can specify a core to run agent on.
#cpu: 0

# Logging configuration.
#logger:
# Where to print logs, stderr or stdout.
#  mode: stderr
# How much logs to print, available levels are: error, info, debug.
#  level: info
# Sampling reduces number of logs by not printing duplicated log entries within
# a second. The first N (initial) entries with a given level and message are
# printed. If more entries with the same level and message are seen during
# the same interval, every Mth (thereafter) message is logged and the rest is
# dropped.
#  sampling:
#    initial: 1
#    thereafter: 100

# Copy api_address and api_port values from /etc/scylla/scylla.yaml. All the
# needed Scylla configuration options are read from the API.
#scylla:
#  api_address: 0.0.0.0
#  api_port: 10000

# Backup general configuration.
#rclone:
# The number of checkers to run in parallel. Checkers do the equality checking
# of files (local vs. backup location) at the beginning of backup.
#  checkers: 100
#
# The number of file transfers to run in parallel. It can sometimes be useful
# to set this to a smaller number if the remote is giving a lot of timeouts or
# bigger if you have lots of bandwidth and a fast remote.
#  transfers: 2
#
# Number of low level retries to do. This applies to operations like file chunk upload.
#  low_level_retries: 20

# Backup S3 configuration.
#
# Note that when running in AWS Scylla Manager Agent can read hosts IAM role.
# It's recommended to define access rules based on IAM roles.
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
#
# To test bucket accessibility use `scylla-manager-agent check-location` command.
# Example:
# scylla-manager-agent check-location --location s3:scylla-manager-backup
#
# Sample IAM policy for "scylla-manager-backup" bucket:
#
# {
#      "Version": "2012-10-17",
#      "Statement": [
#          {
#              "Effect": "Allow",
#              "Action": [
#                  "s3:GetBucketLocation",
#                  "s3:ListBucket",
#                  "s3:ListBucketMultipartUploads"
#              ],
#              "Resource": [
#                  "arn:aws:s3:::scylla-manager-backup"
#              ]
#          },
#          {
#              "Effect": "Allow",
#              "Action": [
#                  "s3:PutObject",
#                  "s3:GetObject",
#                  "s3:DeleteObject",
#                  "s3:AbortMultipartUpload",
#                  "s3:ListMultipartUploadParts"
#              ],
#              "Resource": [
#                  "arn:aws:s3:::scylla-manager-backup/*"
#              ]
#          }
#      ]
#  }
#
#s3:
# S3 credentials, it's recommended to use IAM roles if possible, otherwise set
# your AWS Access Key ID and AWS Secret Access Key (password) here.
#  access_key_id: UJW5P5MV1SDX4808YO49
#  secret_access_key: +r9bhLfgPB5MO1jvuwyQiL/mr8TgTMJZDY1wZyXi

# Provider of the S3 service. By default this is AWS. There are multiple S3
# API compatible providers that can be used instead. Due to minor differences
# between them we require that exact provider is specified here for full
# compatibility. Supported and tested options are: AWS and Minio.
# The available providers are: Alibaba, AWS, Ceph, DigitalOcean, IBMCOS, Minio, 
# Wasabi, Dreamhost, Netease.
#  provider: AWS
#
# Region to connect to, if running in AWS EC2 instance region is set
# to the local region by default.
#  region:
#
# Endpoint for S3 API, only relevant when using S3 compatible API.
#  endpoint: http://172.31.3.120:8082
#
# The server-side encryption algorithm used when storing this object in S3.
# If using KMS ID you must provide the ARN of Key.
#  server_side_encryption:
#  sse_kms_key_id:
#
# The storage class to use when storing new objects in S3.
#  storage_class:
#
# Concurrency for multipart uploads.
#  upload_concurrency: 2
#
# AWS S3 Transfer acceleration
# https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html
#  use_accelerate_endpoint: false

# Backup GCS configuration.
#
# Note that when running in GCP Scylla Manager Agent can use instance
# Service Account. It's recommended to define access rules based on IAM roles
# attached to Service Account.
# https://cloud.google.com/docs/authentication/production
#
# To test bucket accessibility use `scylla-manager-agent check-location` command.
# Example:
# scylla-manager-agent check-location --location gcs:scylla-manager-backup
#
#gcs:
# GCS credentials, it's recommended to use Service Account authentication
# if possible, otherwise set path to credentials here.
#  service_account_file: /etc/scylla-manager-agent/gcs-service-account.json
#
# The storage class to use when storing new objects in Google Cloud Storage.
#  storage_class:

# Backup Microsoft Azure blob storage configuration.
#
# Access can be provided with account/key pair but it is recommended to use
# Azure RBAC to the backup storage with IAM policy defined.
# More about role based access https://docs.microsoft.com/en-us/azure/role-based-access-control/.
#
# Sample role JSON definition scoped to the *ScyllaManagerBackup* resource group:
#
# {
#   "properties": {
#     "roleName": "Scylla Backup Storage Contributor",
#     "description": "Contributor access to the blob service for Scylla cluster backups",
#     "assignableScopes": [
#       "/subscriptions/<subscription_uuid>/resourceGroups/ScyllaManagerBackup"
#     ],
#     "permissions": [
#       {
#         "actions": [
#           "Microsoft.Storage/storageAccounts/blobServices/containers/delete",
#           "Microsoft.Storage/storageAccounts/blobServices/containers/read",
#           "Microsoft.Storage/storageAccounts/blobServices/containers/write",
#           "Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action"
#         ],
#         "notActions": [],
#         "dataActions": [
#           "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete",
#           "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
#           "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
#           "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action",
#           "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action"
#         ],
#         "notDataActions": []
#       }
#     ]
#   }
# }
#
azure:
  # Storage account name.
  account: commandcentre
  # Storage account authentication key.
  key: Ni5YLtCf2QgFQ9tsLcjV3V/WDqMiV9IShnvF9PBworbqHYSToQZM8K2h9aAycleL**************kg==



@Karol_Kokoszka Hello,
I wanted to know if you were able to take a look at the YAML file I uploaded.

Thanks @Karol_Kokoszka

Hi @Karol_Kokoszka, I am also having this same issue.
We deployed ScyllaDB, scylla-operator and scylla-manager with Helm on Kubernetes, and while trying to configure backup to AWS S3 compatible storage, we get Error: get backup target: location is not accessible.
access_key, secret, and endpoint in scylla-manager-agent.yaml are correctly set on the pod. From each node the scylla-manager-agent check-location --debug --location s3: command also works fine.

But when we try to test the backup from the Scylla Manager node, it fails with the error:
Error: get backup target: location is not accessible

On the side, I've also tried using Scylla nodetool. While this works when executed inside the Scylla pod, when I try to run the command
nodetool -h "$SCYLLADB_HOST" -p 7199 snapshot $keyspace from the backup pod in my cluster (in the same namespace), I get a connection refused error.
I've made the necessary authentication corrections to the cassandra-env.sh file and the scylla-jmx file, but I still get the error nodetool: Failed to connect to '[hostname.domain.com:7199]' - ConnectException: 'Connection refused (Connection refused)'.

Kindly help with clear backup documentation for users of ScyllaDB on Kubernetes.

@a3ts can you add the --debug=true flag to the check-location command you execute? It should give a bit more meaningful information.
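For example (the bucket name is a placeholder; in a Kubernetes setup, run it inside the scylla-manager-agent container on the node):

scylla-manager-agent check-location --debug=true --location s3:<your-bucket>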

I encountered the same issue and found the reason:
In the documentation at Setup S3 compatible storage | ScyllaDB Docs, step 6 should be sudo systemctl restart scylla-manager-agent, not start.
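In other words, the sequence after editing the agent configuration should be (standard systemd commands; config path as in the packaged install):

# 1. edit credentials/endpoint in /etc/scylla-manager-agent/scylla-manager-agent.yaml
# 2. restart (not start) so the already-running agent picks up the changes
sudo systemctl restart scylla-manager-agent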
