hashicorp / packer-plugin-ansible

Packer plugin for Ansible Provisioner
https://www.packer.io/docs/provisioners/ansible

Packer 1.12.0-alpha1 testing #199


lbajolet-hashicorp commented 2 months ago

Hi, your friend Packer maintainer here!

I am opening this issue as a follow-up to the github.com/zclconf/go-cty breaking change issue we opened on Packer Plugin SDK last year. With this upcoming release, we are starting to tackle phase 2 of this issue, with some changes compared to the original plan.

We are not migrating to go-plugin right now; instead, we have introduced a way for Packer to toggle between Gob and Protobuf/msgpack for all of its over-the-wire communication. This change does not apply to communicators, which will continue to use Gob for over-the-wire communication.

With the alpha we released last week, Packer now behaves as follows:

  1. During plugin discovery, Packer looks for a new "protocol_version":"v2" attribute in the describe output returned by each plugin binary (a sketch of that output follows this list).
  2. If the "protocol_version":"v2" attribute is missing from a plugin's output, Packer falls back to compatibility mode and uses Gob for communication between Packer and plugins.
  3. If all the discovered plugins support "protocol_version":"v2", Protobuf/msgpack is used for communication between Packer and plugins.
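
For illustration, this is roughly what that discovery step looks like if you run a plugin binary's describe command by hand. The exact field set depends on your SDK version and the values below are made up, but a protobuf/msgpack-capable build should report the new protocol_version attribute:

$ ./packer-plugin-ansible describe
{"version":"1.1.4","sdk_version":"0.6.0-dev","api_version":"x5.0","protocol_version":"v2","builders":[],"post_processors":[],"provisioners":["ansible","ansible-local"],"datasources":[]}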

This behavior can be disabled with the PACKER_FORCE_GOB environment variable, which acts as an escape hatch if a build is blocked by a bug in the new code that handles protobuf/msgpack serialization.

What it means for plugins

This change should be as transparent as possible for you as plugin developers, and for our users. There will, however, be one change to make in the plugin's code: updating the SDK to the upcoming 0.6.0 release, whose release date has not yet been determined.
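
Once 0.6.0 is out, that bump should amount to something along these lines (the version tag here is an assumption, since the release does not exist yet):

$ go get github.com/hashicorp/packer-plugin-sdk@v0.6.0 && go mod tidy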

Right now, the code that handles this protobuf/msgpack logic lives in a grpc_base branch of the SDK. Apologies for the name; it no longer reflects the objective.

Before we can consider releasing this version of Packer, and the matching version of the SDK, we'd love to get your help.

Our requests

As we move to supporting both protocols, we need you to perform a couple of tests with your plugin to ensure we maintain compatibility for your supported configurations.

We've prepared a series of steps that we'd like you to run on your code, alongside Packer 1.12.0-alpha1, which you can get from our releases page.

Once you have Packer 1.12.0-alpha1 set up, we'd ask that you try the following scenarios on templates of your choice.
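
A quick sanity check that you are running the right binary is to look at the reported version (output formatting may differ slightly):

$ packer version
Packer v1.12.0-alpha1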

Ideally, these tests should be run in a plugin directory that is not your usual one, so that Packer does not discover other installed plugins that would force it into compatibility (i.e. Gob) mode.
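
For example, a throwaway directory exported as PACKER_PLUGIN_PATH for the duration of the tests works well (the path is just an example):

$ mkdir -p ~/packer-test-plugins
$ export PACKER_PLUGIN_PATH=~/packer-test-plugins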

Scenario 1 - Packer 1.12.0-alpha1 + plugin latest released version

In this scenario, you should expect Packer to use Gob for serialization, as the latest released plugin is presumably not yet compatible with protobuf/msgpack. This is the baseline test, to ensure Packer 1.12.0 still works with your plugin.

If you do not have one already, this will also help you get a baseline working template that relies solely on your plugin and, possibly, Packer's embedded components (a minimal sketch follows the note below).

Note: this point is crucial, as none of the existing plugin releases support both serialization formats, so any of them will force Packer to run in compatibility mode.
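
As a starting point, a minimal template for this plugin could look something like the sketch below. It is only an illustration: the null builder is one of Packer's embedded components, the SSH values are placeholders for a host you can actually reach, and playbook.yml is any trivial playbook of your own.

$ cat > test.pkr.hcl <<'EOF'
packer {
  required_plugins {
    ansible = {
      source  = "github.com/hashicorp/ansible"
      version = ">= 1.0.0"
    }
  }
}

# The null builder is embedded in Packer core; it only opens a
# communicator, which is enough to exercise the ansible provisioner.
source "null" "test" {
  ssh_host             = "192.0.2.10"        # placeholder
  ssh_username         = "test-user"         # placeholder
  ssh_private_key_file = "/path/to/ssh-key"  # placeholder
}

build {
  sources = ["source.null.test"]

  provisioner "ansible" {
    playbook_file = "./playbook.yml"
  }
}
EOF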

Scenario 2 - Packer 1.12.0-alpha1 + plugin custom build with protobuf/msgpack support

This scenario will require you to bump your dependency on the packer-plugin-sdk to use the grpc_base branch instead of a current release.

You can do this with the following commands, for example:

$ go get -v github.com/hashicorp/packer-plugin-sdk@grpc_base && go mod tidy

When this is done, you can compile the plugin and install it in your test plugin directory.
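
Compiling is usually just a go build from the plugin repository root; the binary name below follows the usual packer-plugin-<name> convention, but adjust it to your layout:

$ go build -o packer-plugin-ansible .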

$ PACKER_PLUGIN_PATH=<test-plugin directory> packer plugins install --path compiled-plugin-binary github.com/org/name

This custom build will need to report the highest version of your plugin installed locally (and satisfy any version constraints in your template) in order for Packer to prioritize it and use protobuf/msgpack for communication.
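
If in doubt, you can list what Packer sees in the test directory and check that the custom build is the highest version present, with something along these lines:

$ PACKER_PLUGIN_PATH=<test-plugin directory> packer plugins installed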

To know for sure whether Packer used protobuf for communication, you can take a peek at the verbose logs, where you should find a line that says so.

$ PACKER_PLUGIN_PATH=<test-plugin directory> PACKER_LOG=1 packer build <template> 2>&1 | grep 'Using protobuf for communication with plugins'

This sample command should show that Packer is using the expected protocol for communication. If the grep returns nothing, the build fell back to Gob, which means the negotiation logic failed, and we may need to do some troubleshooting to understand what happened. In that case, please let us know by responding to this issue, and we'll be in touch to sort this out.

Scenario 3 - Packer 1.12.0-alpha1 + plugin custom build with protobuf/msgpack support, with PACKER_FORCE_GOB=1

As a follow-up to scenario 2, we want to ensure that the code handling the logic for switching protocols works if a fallback is requested by a user.

The overall process is similar; the only difference you should expect is that the grep returns nothing, instead of matching the log line that reports protobuf being used.

$ PACKER_PLUGIN_PATH=<test-plugin directory> PACKER_FORCE_GOB=1 PACKER_LOG=1 packer build <template> 2>&1 | grep 'Using protobuf for communication with plugins'

Scenario 4 - Packer 1.11.2 (latest release) with a protobuf-compatible plugin

This one is more of a sanity test, ensuring your plugin remains compatible with older versions of Packer. If you haven't already, please download the latest official Packer 1.11.2 release from the releases page. We expect this one to succeed at all times, but we'd like to be as sure as we can before we release 😃
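
The invocation is the same as in the previous scenarios, just with the 1.11.2 binary; the binary name below is an assumption about where you unpacked the release, and with this version the protobuf log line should never appear:

$ PACKER_PLUGIN_PATH=<test-plugin directory> PACKER_LOG=1 ./packer_1.11.2 build <template>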

Conclusion

This is overall a small update to Packer core, and hopefully a small enough change that it will be easy to roll into your codebase. It also paves the way for us to remove our dependency on @nywilken's go-cty fork down the road.

We are aiming to release this in the coming months, and we're hoping to squash as many bugs as possible before then, so this doesn't impact real-life user workflows.

Thank you for your continued support!