When uploading or retrieving cookbooks on a Chef Automate cluster, you may observe partial or complete failure of cookbook uploads:
Uploading workstation [0.2.1]
Uploading base_cookbook [1.0.0]
ERROR: Failed to upload chef/home/chef-repo/cookbooks/a_file/.kitchen.yml (01f2b3e91daaa0f8c0c1ae7a89312e5c) to https://chef-server.cluster.com:443/bookshelf/organization-9cbe11ee1f044279c24e33afe578f94a/checksum-01f2b3e91daaa0f8c0c1ae7a89312e5c?AWSAccessKeyId=6cc1756947263e8a98da0f094b0bb55859b3190c&Expires=1533597750&Signature=PPyLLjPHJ22GqHisSuiH%2BdpfhK8%3D : 403 "Forbidden"
<?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes></StringToSignBytes><RequestId>g2gDZAATYm9va3NoZWxmQDEyNy4wLjAuMWgDYgAABf1iAAka+mIACkV3YgAADMs=</RequestId><HostId></HostId><SignatureProvided>PPyLLjPHJ22GqHisSuiH+dpfhK8=</SignatureProvided><StringToSign>PUT
ERROR: Failed to upload
When a Chef Automate cluster is deployed behind a load balancer (or any device that manipulates application-layer payloads, such as a Squid proxy, AWS WAF, Kemp, or Akamai), requests originating from a Chef Workstation can be transformed or augmented on their way to the Chef Infra Server frontends (FEs). Depending on the configuration and technology, headers can be manipulated, removed, or dropped.
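To illustrate why header manipulation produces this error: the bookshelf URL carries an S3-style presigned signature, an HMAC-SHA1 over a "string to sign" assembled from parts of the request (HTTP method, Content-MD5, Content-Type, expiry, resource path). If a middlebox drops or rewrites any of those parts, the server reconstructs a different string and the signatures no longer match. A minimal sketch, using a made-up secret and the checksum from the error above:

```shell
secret='hypothetical-bookshelf-secret'   # illustration only, not a real key

sign() { printf '%s' "$1" | openssl dgst -sha1 -hmac "$secret" -binary | openssl base64; }

# string to sign as the workstation built it (method, Content-MD5,
# Content-Type, expiry, resource path)
original_sts='PUT
01f2b3e91daaa0f8c0c1ae7a89312e5c

1533597750
/bookshelf/organization-9cbe11ee1f044279c24e33afe578f94a/checksum-01f2b3e91daaa0f8c0c1ae7a89312e5c'

# the same string as the server rebuilds it after a middlebox
# dropped the Content-MD5 header in transit
stripped_sts='PUT


1533597750
/bookshelf/organization-9cbe11ee1f044279c24e33afe578f94a/checksum-01f2b3e91daaa0f8c0c1ae7a89312e5c'

original=$(sign "$original_sts")
mangled=$(sign "$stripped_sts")

echo "workstation signature: $original"
echo "server-side signature: $mangled"   # differs, hence 403 SignatureDoesNotMatch
```

The exact fields bookshelf signs are an implementation detail; the point is that any in-flight change to a signed element invalidates the signature.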
| Product | Version | Topology |
| Chef Automate 2 Cluster | 1.0.0+ | Cluster |
The first port of call is to understand whether your load balancer is passing all headers through to the Chef Server frontends. We recommend checking with the infrastructure team in case any hardening or configuration changes on the load balancer have impacted the payload of workstation requests. Running the upload in debug mode:
knife cookbook upload base_cookbook -VV
should show whether all headers are being passed with the requests and responses through whatever middlebox intercepts them.
Establish whether those request headers are being stripped before they reach the backends; if so, that will cause failures like this. If the headers are being passed along untouched, it may simply be that the frontend services need a restart.
After investigating the headers theory, collect the debug (-VV) logs from a failing knife upload containing entries such as:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
We have previously observed that in some instances configuration only partially propagates, and we often advise restarting bookshelf on each frontend in turn so that it picks up the latest configuration:
chef-server-ctl restart bookshelf
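Applied across the cluster, that step might be scripted as below. The hostnames are placeholders, and the loop only echoes the ssh commands so they can be reviewed before running them for real:

```shell
# Placeholder FE hostnames; substitute your cluster's frontends.
FRONTENDS="fe1.chef-server.cluster.com fe2.chef-server.cluster.com"

for fe in $FRONTENDS; do
  # drop the leading echo to actually perform the restarts over ssh
  echo ssh "$fe" "sudo chef-server-ctl restart bookshelf"
done
```

Restarting the frontends one at a time keeps bookshelf available on the other nodes while each restart completes.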
If this does not resolve the issue and you have not yet received internal verification that the load balancer is passing headers through, we recommend reconfiguring the config.rb file on the knife workstation (see https://docs.chef.io/workstation/config_rb/) to upload a cookbook directly to one of the Chef Server frontends:
knife cookbook upload base_cookbook -VV
If this works then it is verification that something is amiss in the upstream loadbalancer.
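For that direct-to-frontend test, a minimal knife configuration might look like the sketch below; the hostname, organization, user name, and key path are all placeholders for your own values. Writing it to a separate file and passing it with knife's -c flag avoids touching your existing config.rb:

```shell
cat > /tmp/direct-fe-config.rb <<'EOF'
# placeholders: substitute your own user, client key, FE hostname, and org
node_name        "workstation-user"
client_key       "/home/user/.chef/workstation-user.pem"
# point directly at one frontend instead of the load-balanced VIP
chef_server_url  "https://fe1.chef-server.cluster.com/organizations/my-org"
EOF
```

Then run `knife cookbook upload base_cookbook -VV -c /tmp/direct-fe-config.rb`; a successful upload implicates the device sitting in front of the frontends.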
A possible solution (if the issue is specific to a load balancer) is to enable sticky sessions, if they have not already been enabled. This ensures that the knife workstation caches one key/ID set and uses it for the duration of the interaction; the upload breaks when knife holds one key/ID set but its requests are routed to different FEs within the same call. If the device you are using operates at layer 4 (transport) rather than layer 7 (application), an interim workaround is to ensure that the keys from one FE node are propagated to all FEs: select one FE and copy the file contents located under:
to all other FE nodes followed by:
That should ensure that the key knife caches is available for every subsequent call.