// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
syntax = "proto3";
package hashicorp.consul.internal.peerstream;
import "annotations/ratelimit/ratelimit.proto";
import "google/protobuf/any.proto";
import "private/pbpeering/peering.proto";
import "private/pbservice/node.proto";
// TODO(peering): Handle this some other way
import "private/pbstatus/status.proto";
// TODO(peering): comments
// TODO(peering): also duplicate the pbservice, some pbpeering, and ca stuff.
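//
// A rough sketch of the expected lifecycle, pieced together from the comments in this
// file (the peering implementation is the authoritative reference):
//
//   1. The dialing peer calls ExchangeSecret, presenting the one-time establishment
//      secret from the peering token, and receives a long-lived stream secret.
//   2. The dialing peer opens StreamResources and sends an Open message carrying its
//      PeerID and the stream secret (StreamSecretID).
//   3. The peers then exchange ReplicationMessages (Request/Response/Heartbeat) until
//      the peering is deleted, at which point Terminated is sent.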
service PeerStreamService {
  // StreamResources opens an event stream for resources to share between peers, such as services.
  // Events are streamed as they happen.
  // buf:lint:ignore RPC_REQUEST_STANDARD_NAME
  // buf:lint:ignore RPC_RESPONSE_STANDARD_NAME
  // buf:lint:ignore RPC_REQUEST_RESPONSE_UNIQUE
  rpc StreamResources(stream ReplicationMessage) returns (stream ReplicationMessage) {
    option (hashicorp.consul.internal.ratelimit.spec) = {
      operation_type: OPERATION_TYPE_READ,
      operation_category: OPERATION_CATEGORY_PEER_STREAM
    };
  }

  // ExchangeSecret is a unary RPC for exchanging the one-time establishment secret
  // for a long-lived stream secret.
  rpc ExchangeSecret(ExchangeSecretRequest) returns (ExchangeSecretResponse) {
    option (hashicorp.consul.internal.ratelimit.spec) = {
      operation_type: OPERATION_TYPE_WRITE,
      operation_category: OPERATION_CATEGORY_PEER_STREAM
    };
  }
}
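// An illustrative (non-normative) subscription exchange, assembled from the field
// comments below; the nonce values are invented for the example:
//
//   subscribing peer -> exporting peer: Request{ResourceURL, ResponseNonce: ""}          (initial subscription)
//   exporting peer   -> subscribing peer: Response{Nonce: "1", ResourceURL, ResourceID, Resource, operation}
//   subscribing peer -> exporting peer: Request{ResourceURL, ResponseNonce: "1"}         (ACK)
//   subscribing peer -> exporting peer: Request{ResourceURL, ResponseNonce: "2", Error}  (NACK of a later response)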
message ReplicationMessage {
  oneof Payload {
    Open open = 1;
    Request request = 2;
    Response response = 3;
    Terminated terminated = 4;
    Heartbeat heartbeat = 5;
  }

  // Open is the initial message sent by a dialing peer to establish the peering stream.
  message Open {
    // An identifier for the peer making the request.
    // This identifier is provisioned by the serving peer prior to the request from the dialing peer.
    string PeerID = 1;

    // StreamSecretID contains the long-lived secret used for stream authn/authz.
    string StreamSecretID = 2;

    // Remote contains metadata about the remote peer.
    hashicorp.consul.internal.peering.RemoteInfo Remote = 3;
  }
  // A Request requests to subscribe to a resource of a given type.
  message Request {
    // An identifier for the peer making the request.
    // This identifier is provisioned by the serving peer prior to the request from the dialing peer.
    string PeerID = 1;

    // ResponseNonce corresponding to that of the response being ACKed or NACKed.
    // Initial subscription requests will have an empty nonce.
    // The nonce is generated and incremented by the exporting peer.
    // TODO
    string ResponseNonce = 2;

    // The type URL for the resource being requested or ACK/NACKed.
    string ResourceURL = 3;

    // The error if the previous response was not applied successfully.
    // This field is empty in the first subscription request.
    status.Status Error = 5;
  }
  // A Response contains resources corresponding to a subscription request.
  message Response {
    // Nonce identifying a response in a stream.
    string Nonce = 1;

    // The type URL of the resource being returned.
    string ResourceURL = 2;

    // An identifier for the resource being returned.
    // This could be the SPIFFE ID of the service.
    string ResourceID = 3;

    // The resource being returned.
    google.protobuf.Any Resource = 4;

    // REQUIRED. The operation to be performed in relation to the resource.
    Operation operation = 5;
  }
  // Terminated is sent when a peering is deleted locally.
  // This message signals to the peer that they should clean up their local state about the peering.
  message Terminated {}

  // Heartbeat is sent to verify that the connection is still active.
  message Heartbeat {}
}
// Operation enumerates supported operations for replicated resources.
enum Operation {
  OPERATION_UNSPECIFIED = 0;

  // UPSERT represents a create or update event.
  OPERATION_UPSERT = 1;
}
// LeaderAddress is sent when the peering service runs on a Consul node
// that is not a leader. The node either lost leadership or was never a leader.
message LeaderAddress {
  // address is a best-effort ip:port hint at the cluster leader's address.
  string address = 1;
}
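// The two messages below are payload types for the replication stream; per their
// comments and the Response message above, they are presumably carried inside
// Response.Resource (a google.protobuf.Any) and identified by ResourceURL and ResourceID.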
// ExportedService is one of the types of data returned via peer stream replication.
message ExportedService {
  repeated hashicorp.consul.internal.service.CheckServiceNode Nodes = 1;
}
// ExportedServiceList is one of the types of data returned via peer stream replication.
message ExportedServiceList {
  // The identifiers for the services being exported.
  repeated string Services = 1;
}
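// Secret exchange, as described by the ExchangeSecret RPC and Open message comments:
// the dialing peer presents the one-time EstablishmentSecret obtained from the peering
// token and receives a long-lived StreamSecret, which it then supplies as
// Open.StreamSecretID when opening the replication stream.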
message ExchangeSecretRequest {
  // PeerID is the ID of the peering, as determined by the cluster that generated the
  // peering token.
  string PeerID = 1;

  // EstablishmentSecret is the one-time-use secret encoded in the received peering token.
  string EstablishmentSecret = 2;
}
message ExchangeSecretResponse {
  // StreamSecret is the long-lived secret to be used for authentication with the
  // peering stream handler.
  string StreamSecret = 1;
}