github.com/community-terraform-providers/terraform-provider-ignition

go get github.com/community-terraform-providers/terraform-provider-ignition

40 Dependencies

cloud.google.com/go
v0.49.0
Package cloud is the root of the packages used to access Google Cloud Services. See https://godoc.org/cloud.google.com/go for a full list of sub-packages. All clients in sub-packages are configurable via client options; these options are described at https://godoc.org/google.golang.org/api/option.

All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. Google Application Default Credentials (ADC) is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://cloud.google.com/docs/authentication/production. See the example below.

You can also use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.cloud.google.com/iam-admin/serviceaccounts. The same steps apply to the Secret Manager client and to the other client libraries underneath this package. In some cases (for instance, when you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option; the google package in that case is at golang.org/x/oauth2/google. Note that scopes can be found at https://developers.google.com/identity/protocols/oauth2/scopes, and are also provided in all auto-generated libraries: for example, cloud.google.com/go/secretmanager/apiv1 provides DefaultAuthScopes.

By default, non-streaming methods, like Create or Get, will have a default deadline applied to the context provided at call time, unless a context deadline is already set. Streaming methods have no default deadline and will run indefinitely. To set a timeout for an RPC, use context.WithTimeout; to arrange for an RPC to be canceled, use context.WithCancel. Transient errors will be retried when correctness allows. To opt out of default deadlines, set the temporary environment variable GOOGLE_API_GO_EXPERIMENTAL_DISABLE_DEFAULT_DEADLINE to "true" prior to client creation. This affects all Google Cloud Go client libraries. This opt-out mechanism will be removed in a future release; file an issue at https://github.com/googleapis/google-cloud-go if the default deadlines cannot work for you. Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient: dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context.

Connection pooling differs in clients based on their transport. Cloud clients rely on either HTTP or gRPC transports to communicate with Google Cloud. Clients that use HTTP (bigquery, compute, storage, and translate) rely on the underlying HTTP transport to cache connections for later re-use; these are cached to the default http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport. For gRPC clients (all others in this repo), connection pooling is configurable: users may specify option.WithGRPCConnectionPool(n) as a client option to NewClient calls, which configures the underlying gRPC connections to be pooled and addressed in a round-robin fashion.

Minimal docker images like Alpine lack CA certificates; this causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928 for more information.

To see gRPC logs, set the environment variable GRPC_GO_LOG_SEVERITY_LEVEL (see https://godoc.org/google.golang.org/grpc/grpclog for more information). For HTTP logging, set the GODEBUG environment variable to "http2debug=1" or "http2debug=2".

Most of the errors returned by the generated clients can be converted into a `grpc.Status`. Converting your errors to this type can be useful to get more information about what went wrong while debugging.

Clients in this repository are considered alpha or beta unless otherwise marked as stable in the README.md. Semver is not used to communicate stability of clients. Alpha and beta clients may change or go away without notice. Clients marked stable will maintain compatibility with future versions for as long as we can reasonably sustain. Incompatible changes might be made in some situations, including:
- Security bugs may prompt backwards-incompatible changes.
- Situations in which components are no longer feasible to maintain without making breaking changes, including removal.
- Parts of the client surface may be outright unstable and subject to change. These parts of the surface will be labeled with the note, "It is EXPERIMENTAL and subject to change or removal without notice."
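The inline examples from the package documentation are not reproduced in this listing, so here is a minimal sketch of the points above: ADC authentication, a per-RPC timeout via context.WithTimeout, and gRPC connection pooling. It assumes the Secret Manager client the text mentions (with the era-appropriate genproto types); the resource name is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"google.golang.org/api/option"
	secretmanagerpb "google.golang.org/genproto/googleapis/cloud/secretmanager/v1"
)

func main() {
	ctx := context.Background()

	// ADC is picked up automatically when no explicit credentials are given.
	// WithGRPCConnectionPool(4) pools connections, addressed round-robin.
	// Note: no dialing timeout on this context, per the caveat above.
	client, err := secretmanager.NewClient(ctx, option.WithGRPCConnectionPool(4))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Per-RPC timeout via context.WithTimeout.
	rpcCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	secret, err := client.GetSecret(rpcCtx, &secretmanagerpb.GetSecretRequest{
		Name: "projects/my-project/secrets/my-secret", // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(secret.Name)
}
```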
Last ver 5mos ago
Apache-2.0
cloud.google.com/go/bigquery
v1.3.0
Package bigquery provides a client for the BigQuery service. The following assumes a basic familiarity with BigQuery concepts; see https://cloud.google.com/bigquery/docs. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

To start working with this package, create a client. To query existing tables, create a Query and call its Read method, then iterate through the resulting rows. You can store a row using anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value. A slice is simplest; you can also use a struct whose exported fields match the query.

You can also start the query running and get the results later. Create the query as above, but call Run instead of Read. This returns a Job, which represents an asynchronous operation. Get the job's ID, a printable string; you can save this string to retrieve the results at a later time, even in another process. To retrieve the job's results from the ID, first look up the Job, then use the Job.Read method to obtain an iterator and loop over the rows. Query.Read is just a convenience method that combines Query.Run and Job.Read.

You can refer to datasets in the client's project with the Dataset method, and in other projects with the DatasetInProject method. These methods create references to datasets, not the datasets themselves: you can have a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to create a dataset from a reference.

You can refer to tables with Dataset.Table. Like bigquery.Dataset, bigquery.Table is a reference to an object in BigQuery that may or may not exist. You can create, delete and update the metadata of tables with methods on Table; for instance, you can create a temporary table this way. Creating a table with a schema is covered next.

There are two ways to construct schemas with this package: you can build a schema by hand, or you can infer the schema from a struct. Struct inference supports tags like those of the encoding/json package, so you can change names, ignore fields, or mark a field as nullable (non-required). Fields declared as one of the Null types (NullInt64, NullFloat64, NullString, NullBool, NullTimestamp, NullDate, NullTime, NullDateTime, and NullGeography) are automatically inferred as nullable, so the "nullable" tag is only needed for []byte, *big.Rat and pointer-to-struct fields. Having constructed a schema, you can create a table with it.

You can copy one or more tables to another table. Begin by constructing a Copier describing the copy. Then set any desired copy options, and finally call Run to get a Job; you can chain the call to Run if you don't want to set options. You can wait for your job to complete with Job.Wait, which polls with exponential backoff, or you can poll yourself, if you wish.

There are two ways to populate a table with this package: load the data from a Google Cloud Storage object, or upload rows directly from your program. For loading, first create a GCSReference, configuring it if desired. Then make a Loader, optionally configure it as well, and call its Run method. To upload, first define a type that implements the ValueSaver interface, which has a single method named Save. Then create an Inserter, and call its Put method with a slice of values. You can also upload a struct that doesn't implement ValueSaver: use the StructSaver type to specify the schema and insert ID by hand, or just supply the struct or struct pointer directly and the schema will be inferred. BigQuery allows for higher throughput when omitting insertion IDs. To enable this, specify the sentinel `NoDedupeID` value for the insertion ID when implementing a ValueSaver.

If you've been following so far, extracting data from a BigQuery table into a Google Cloud Storage object will feel familiar. First create an Extractor, then optionally configure it, and lastly call its Run method.

Errors returned by this client are often of the type googleapi.Error (https://godoc.org/google.golang.org/api/googleapi#Error). These errors can be introspected for more information by using `xerrors.As` with the richer *googleapi.Error type. In some cases, your client may receive unstructured googleapi.Error responses; in such cases, it is likely that you have exceeded BigQuery request limits, documented at https://cloud.google.com/bigquery/quotas
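As a rough sketch of the basic query flow described above (create a client, call Query.Read, iterate rows into a []bigquery.Value): the project ID is a placeholder, and the table queried is a well-known public sample dataset, so swap in your own.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()

	client, err := bigquery.NewClient(ctx, "my-project") // placeholder project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Query an existing table and iterate through the resulting rows.
	q := client.Query("SELECT name, SUM(number) AS total " +
		"FROM `bigquery-public-data.usa_names.usa_1910_2013` " +
		"GROUP BY name ORDER BY total DESC LIMIT 10")

	it, err := q.Read(ctx) // convenience for Query.Run + Job.Read
	if err != nil {
		log.Fatal(err)
	}
	for {
		var row []bigquery.Value // a slice is the simplest row container
		err := it.Next(&row)
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(row)
	}
}
```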
Last ver 5mos ago
Apache-2.0
cloud.google.com/go/pubsub
v1.1.0
Package pubsub provides an easy way to publish and receive Google Cloud Pub/Sub messages, hiding the details of the underlying server RPCs. Google Cloud Pub/Sub is a many-to-many, asynchronous messaging system that decouples senders and receivers. More information about Google Cloud Pub/Sub is available at https://cloud.google.com/pubsub/docs. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Google Cloud Pub/Sub messages are published to topics, which may be created using the pubsub package. Messages may then be published to a topic. Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub service. Publish returns a PublishResult, which behaves like a future: its Get method blocks until the message has been sent to the service. The first time you call Publish on a topic, goroutines are started in the background. To clean up these goroutines, call Stop.

To receive messages published to a topic, clients create subscriptions to the topic. There may be more than one subscription per topic; each message that is published to the topic will be delivered to all of its subscriptions. Messages are consumed from a subscription via callback. The callback is invoked concurrently by multiple goroutines, maximizing throughput. To terminate a call to Receive, cancel its context. Once client code has processed the message, it must call Message.Ack or Message.Nack; otherwise the message will eventually be redelivered. Ack/Nack MUST be called within the Receive handler function, and not from a goroutine; otherwise, flow control (e.g. ReceiveSettings.MaxOutstandingMessages) will not be respected, and messages can get orphaned when cancelling Receive. If the client cannot or doesn't want to process the message, it can call Message.Nack to speed redelivery. For more information and configuration options, see the discussion of deadlines below.

Note: it is possible for messages to be redelivered even if Message.Ack has been called. Client code must be robust to multiple deliveries of messages. Note: this uses Pub/Sub's streaming pull feature, which has properties that may be surprising. Please take a look at https://cloud.google.com/pubsub/docs/pull#streamingpull for more details on how streaming pull behaves compared to the synchronous pull method.

The default pubsub deadlines are suitable for most use cases, but may be overridden; the following describes the tradeoffs that should be considered when overriding the defaults. Behind the scenes, each message returned by the Pub/Sub server has an associated lease, known as an "ACK deadline". Unless a message is acknowledged within the ACK deadline, or the client requests that the ACK deadline be extended, the message will become eligible for redelivery. As a convenience, the pubsub client will automatically extend deadlines until the message is acked/nacked or the MaxExtension period elapses. ACK deadlines are extended periodically by the client. The initial ACK deadline given to messages is 10s. The period between extensions, as well as the length of the extension, automatically adjust depending on the time it takes to ack messages, up to 10m. This has the effect that subscribers that process messages quickly have their message ack deadlines extended for a short amount, whereas subscribers that process messages slowly have their message ack deadlines extended for a large amount. The net effect is fewer RPCs sent from the client library. For example, consider a subscriber that takes 3 minutes to process each message. Since the library has already recorded several 3-minute "time to ack"s in a percentile distribution, future message extensions are sent with a value of 3 minutes, every 3 minutes. Suppose the application crashes 5 seconds after the library sends such an extension: the Pub/Sub server would wait the remaining 2m55s before re-sending the messages out to other subscribers.

Please note that the client library does not use the subscription's AckDeadline by default. To enforce the subscription AckDeadline, set MaxExtension to the subscription's AckDeadline. For use cases where message processing exceeds 30 minutes, we recommend using the base client in a pull model, since long-lived streams are periodically killed by firewalls. See the example at https://godoc.org/cloud.google.com/go/pubsub/apiv1#example-SubscriberClient-Pull-LengthyClientProcessing

To use an emulator with this library, you can set the PUBSUB_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Pub/Sub. You can then create and use a client as usual.
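As a rough sketch of the publish/receive flow described above: the project, topic, and subscription IDs are placeholders, and the topic and subscription are assumed to exist already. The detail worth copying is that Ack is called inside the Receive callback, not from a separate goroutine.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()

	client, err := pubsub.NewClient(ctx, "my-project") // placeholder project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Publish queues the message and returns a PublishResult immediately;
	// Get blocks until the message has been sent to the service.
	topic := client.Topic("my-topic") // placeholder topic ID
	id, err := topic.Publish(ctx, &pubsub.Message{Data: []byte("hello")}).Get(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("published message", id)
	topic.Stop() // clean up the background publishing goroutines

	// Receive invokes the callback concurrently from multiple goroutines.
	cctx, cancel := context.WithCancel(ctx)
	err = client.Subscription("my-sub").Receive(cctx, // placeholder sub ID
		func(ctx context.Context, m *pubsub.Message) {
			fmt.Printf("got: %s\n", m.Data)
			m.Ack()  // Ack within the handler, per the note above
			cancel() // stop after one message, just for this sketch
		})
	if err != nil {
		log.Fatal(err)
	}
}
```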
Last ver 5mos ago
Apache-2.0
cloud.google.com/go/storage
v1.4.0
Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

To start working with this package, create a client. The client will use your default application credentials. Clients should be reused instead of created as needed; the methods of Client are safe for concurrent use by multiple goroutines. If you only wish to access public data, you can create an unauthenticated client.

To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual. Please note that there is no official emulator for Cloud Storage.

A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle. A handle is a reference to a bucket: you can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call Create on the handle. Note that although buckets are associated with projects, bucket names are global across all projects. Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use Attrs.

An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object; instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data. Objects also have attributes, which you can fetch with Attrs.

Listing objects in a bucket is done with the Bucket.Objects method. Objects are listed lexicographically by name. To filter objects lexicographically, Query.StartOffset and/or Query.EndOffset can be used. If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process.

Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see https://cloud.google.com/storage/docs/access-control/iam). To list the ACLs of a bucket or object, obtain an ACLHandle and call its List method. You can also set and delete ACLs.

Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it; conditions let you express exactly that.

You can obtain a URL that lets anyone read or write an object for a limited time; you don't need to create a client to do this. See the documentation of SignedURL for details. A signed post policy is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user. For more information, please see https://cloud.google.com/storage/docs/xml-api/post-object as well as the documentation of GenerateSignedPostPolicyV4.

Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type. See https://pkg.go.dev/google.golang.org/api/googleapi#Error for more information.

Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received; to stop retries from continuing, use context timeouts or cancellation. The retry strategy in this library follows best practices for Cloud Storage: by default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See https://cloud.google.com/storage/docs/retry-strategy for more information. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry).
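As a rough sketch of the object read/write flow described above, using the standard io interfaces; the bucket and object names are placeholders, and the bucket is assumed to exist.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	client, err := storage.NewClient(ctx) // uses default application credentials
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	obj := client.Bucket("my-bucket").Object("notes.txt") // placeholders

	// Writing: the object is created the first time it is written to.
	w := obj.NewWriter(ctx)
	if _, err := fmt.Fprint(w, "hello, world"); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil { // Close flushes and reports write errors
		log.Fatal(err)
	}

	// Reading it back through the standard io.Reader interface.
	r, err := obj.NewReader(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	data, err := io.ReadAll(r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
```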
Last ver 5mos ago
Apache-2.0
github.com/ajeddeloh/go-json
v0.0.0-20170920214419-6a2fe990e083
Modified version of Go's encoding/json library which allows decoding to a Node struct with offset information
Last ver 10mos ago
2 Stars
BSD-3-Clause
github.com/aws/aws-sdk-go
v1.25.47
AWS SDK for the Go programming language.
Last ver 8mos ago
7.4K Stars
Apache-2.0
github.com/coreos/go-semver
v0.2.0
semver library in Go
Last ver 1yr ago
247 Stars
Apache-2.0
github.com/coreos/go-systemd
v0.0.0-20190719114852-fd7a80b32e1f
Go bindings to systemd socket activation, journal, D-Bus, and unit files
Last ver 1yr ago
1.8K Stars
Apache-2.0
github.com/coreos/ignition
v0.34.0
First boot installer and configuration tool
Last ver 1yr ago
613 Stars
Apache-2.0
github.com/golang/groupcache
v0.0.0-20191027212112-611e8accdfc9
groupcache is a caching and cache-filling library, intended as a replacement for memcached in many cases.
Last ver 1yr ago
10.8K Stars
Apache-2.0
github.com/go-test/deep
v1.0.4
Golang deep variable equality test that returns human-readable differences
Last ver 9mos ago
578 Stars
MIT
github.com/hashicorp/go-hclog
v0.10.0
A common logging package for HashiCorp tools
Last ver 8mos ago
202 Stars
MIT
github.com/hashicorp/hcl
v1.0.0
HCL is the HashiCorp configuration language.
Last ver 10mos ago
3.9K Stars
MPL-2.0
github.com/hashicorp/hcl/v2
v2.1.0
HCL is the HashiCorp configuration language.
Last ver 8mos ago
3.9K Stars
MPL-2.0
github.com/hashicorp/terraform-config-inspect
v0.0.0-20191121111010-e9629612a215
A helper library for shallow inspection of Terraform configurations
Last ver 9mos ago
196 Stars
MPL-2.0
github.com/hashicorp/terraform-plugin-sdk
v1.4.0
Terraform Plugin SDK enables building plugins (providers) to manage any service provider or custom in-house solution
Last ver 1yr ago
270 Stars
MPL-2.0
github.com/hashicorp/terraform-svchost
v0.0.0-20191119180714-d2e4933b9136
Package svchost deals with the representations of the so-called "friendly hostnames" that we use to represent systems that provide Terraform-native remote services, such as module registry, remote operations, etc. Friendly hostnames are specified such that, as much as possible, they are consistent with how web browsers think of hostnames, so that users can bring their intuitions about how hostnames behave when they access a Terraform Enterprise instance's web UI (or indeed any other website) and have this behave in a similar way.
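As a loose illustration of that normalization, the sketch below assumes the package's ForComparison entry point and the Hostname.ForDisplay method (neither is named in the summary above): user input is parsed into a normalized Hostname value, so equivalent spellings compare equal.

```go
package main

import (
	"fmt"
	"log"

	svchost "github.com/hashicorp/terraform-svchost"
)

func main() {
	// ForComparison normalizes a user-entered hostname (case, Unicode/IDN
	// form) roughly the way a browser would. ForComparison/ForDisplay are
	// assumptions about the package API, not drawn from the summary above.
	h, err := svchost.ForComparison("APP.Terraform.io")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(h)              // normalized form, used for comparisons
	fmt.Println(h.ForDisplay()) // friendly form for showing to users
}
```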
Last ver 8mos ago
1 Star
MPL-2.0
github.com/hashicorp/yamux
v0.0.0-20190923154419-df201c70410d
Golang connection multiplexing library
Last ver 10mos ago
1.6K Stars
MPL-2.0
github.com/jstemmer/go-junit-report
v0.9.1
Convert go test output to JUnit XML
Last ver 1yr ago
535 Stars
MIT
github.com/pkg/errors
v0.8.1
Simple error handling primitives
Last ver 1yr ago
7.4K Stars
BSD-2-Clause
github.com/posener/complete
v1.2.3
Bash completion written in Go, plus bash completion for the go command
Last ver 1yr ago
799 Stars
MIT
github.com/ulikunitz/xz
v0.5.6
Pure golang package for reading and writing xz-compressed files
Last ver 1yr ago
358 Stars
BSD-3-Clause
github.com/vincent-petithory/dataurl
v0.0.0-20160330182126-9a301d65acbb
Data URL Schemes in Golang
Last ver 9mos ago
116 Stars
MIT
github.com/vmihailenco/msgpack
v4.0.4+incompatible
MessagePack encoding for Golang (msgpack.org[Go])
Last ver 9mos ago
1.6K Stars
BSD-2-Clause