1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -45,6 +45,7 @@

### 3.0 Cleanup

* [CHANGE] **BREAKING CHANGE** Remove span-metrics leftovers and lazy-init generator clients [#6618](https://github.com/grafana/tempo/pull/6618) (@javiermolinar)
* [CHANGE] **BREAKING CHANGE** Decommission livestore MetricsGenerator query service [#6615](https://github.com/grafana/tempo/pull/6615) (@javiermolinar)
* [CHANGE] **BREAKING CHANGE** Remove metrics-generator localblocks processor and related local block storage plumbing. [#6555](https://github.com/grafana/tempo/pull/6555) (@javiermolinar)
* [CHANGE] **BREAKING CHANGE** Remove ingesters [#6504](https://github.com/grafana/tempo/pull/6504) (@javiermolinar)
3 changes: 1 addition & 2 deletions cmd/tempo/app/app.go
@@ -356,8 +356,7 @@ func (t *App) readyHandler(sm *services.Manager, shutdownRequested *atomic.Bool)
}
}

// Generator has a special check that makes sure that it was able to register into the ring,
// and that all other ring entries are OK too.
// Generator has a dedicated readiness check for generator-specific dependencies.
if t.generator != nil {
if err := t.generator.CheckReady(r.Context()); err != nil {
http.Error(w, "Generator not ready: "+err.Error(), http.StatusServiceUnavailable)
1 change: 1 addition & 0 deletions cmd/tempo/app/app_test.go
@@ -23,6 +23,7 @@ func TestApp_RunStop(t *testing.T) {
}()

config := NewDefaultConfig()
config.Target = BackendScheduler
Contributor:

What's this change?

Contributor (author):

Now setting up Kafka is mandatory. If we don't set it and we don't set the target, we get a validation error:

level=info msg="server listening on addresses" http=[::]:60047 grpc=[::]:60048
--- FAIL: TestApp_RunStop (30.00s)
    app_test.go:44: Checking Tempo is up...
    app_test.go:38:
                Error Trace:    /Users/javimolina/grafana/tempo.fork/cmd/tempo/app/app_test.go:38
                                                        /opt/homebrew/Cellar/go/1.26.0/libexec/src/runtime/asm_arm64.s:1447
                Error:          Received unexpected error:
                                failed to init module services: error initialising module: distributor: failed to create distributor: the Kafka topic has not been configured
                Test:           TestApp_RunStop
    app_test.go:43:
                Error Trace:    /Users/javimolina/grafana/tempo.fork/cmd/tempo/app/app_test.go:43
                Error:          Condition never satisfied
                Test:           TestApp_RunStop
FAIL
exit status 1
FAIL    github.com/grafana/tempo/cmd/tempo/app  30.994s

Until we make the whole `all` target kafkaless, we need to use a non-Kafka module. It can be the backend scheduler or any other one.

config.Server.HTTPListenPort = util.MustGetFreePort()
config.Server.GRPCListenPort = util.MustGetFreePort() // not used in the test; set to ensure conflict-free start
config.StorageConfig.Trace.Backend = backend.Local
47 changes: 22 additions & 25 deletions cmd/tempo/app/config.go
@@ -16,7 +16,6 @@ import (
"github.com/grafana/tempo/modules/distributor"
"github.com/grafana/tempo/modules/frontend"
"github.com/grafana/tempo/modules/generator"
generator_client "github.com/grafana/tempo/modules/generator/client"
"github.com/grafana/tempo/modules/ingester"
ingester_client "github.com/grafana/tempo/modules/ingester/client"
"github.com/grafana/tempo/modules/livestore"
@@ -51,28 +50,28 @@ type Config struct {
EnableGoRuntimeMetrics bool `yaml:"enable_go_runtime_metrics,omitempty"`
PartitionRingLiveStore bool `yaml:"partition_ring_live_store,omitempty"` // todo: remove after rhythm migration

Memory MemoryConfig `yaml:"memory,omitempty"`
Server server.Config `yaml:"server,omitempty"`
InternalServer internalserver.Config `yaml:"internal_server,omitempty"`
Distributor distributor.Config `yaml:"distributor,omitempty"`
IngesterClient ingester_client.Config `yaml:"ingester_client,omitempty"`
GeneratorClient generator_client.Config `yaml:"metrics_generator_client,omitempty"`
LiveStoreClient livestore_client.Config `yaml:"live_store_client,omitempty"`
Querier querier.Config `yaml:"querier,omitempty"`
Frontend frontend.Config `yaml:"query_frontend,omitempty"`
Ingester ingester.Config `yaml:"ingester,omitempty"`
Generator generator.Config `yaml:"metrics_generator,omitempty"`
Ingest ingest.Config `yaml:"ingest,omitempty"`
BlockBuilder blockbuilder.Config `yaml:"block_builder,omitempty"`
StorageConfig storage.Config `yaml:"storage,omitempty"`
Overrides overrides.Config `yaml:"overrides,omitempty"`
MemberlistKV memberlist.KVConfig `yaml:"memberlist,omitempty"`
UsageReport usagestats.Config `yaml:"usage_report,omitempty"`
CacheProvider cache.Config `yaml:"cache,omitempty"`
BackendScheduler backendscheduler.Config `yaml:"backend_scheduler,omitempty"`
BackenSchedulerClient backendscheduler_client.Config `yaml:"backend_scheduler_client,omitempty"`
BackendWorker backendworker.Config `yaml:"backend_worker,omitempty"`
LiveStore livestore.Config `yaml:"live_store,omitempty"`
Memory MemoryConfig `yaml:"memory,omitempty"`
Server server.Config `yaml:"server,omitempty"`
InternalServer internalserver.Config `yaml:"internal_server,omitempty"`
Distributor distributor.Config `yaml:"distributor,omitempty"`
IngesterClient ingester_client.Config `yaml:"ingester_client,omitempty"`
MetricsGeneratorClient map[string]any `yaml:"metrics_generator_client,omitempty"` // Deprecated: kept for one-release config compatibility.
LiveStoreClient livestore_client.Config `yaml:"live_store_client,omitempty"`
Querier querier.Config `yaml:"querier,omitempty"`
Frontend frontend.Config `yaml:"query_frontend,omitempty"`
Ingester ingester.Config `yaml:"ingester,omitempty"`
Generator generator.Config `yaml:"metrics_generator,omitempty"`
Ingest ingest.Config `yaml:"ingest,omitempty"`
BlockBuilder blockbuilder.Config `yaml:"block_builder,omitempty"`
StorageConfig storage.Config `yaml:"storage,omitempty"`
Overrides overrides.Config `yaml:"overrides,omitempty"`
MemberlistKV memberlist.KVConfig `yaml:"memberlist,omitempty"`
UsageReport usagestats.Config `yaml:"usage_report,omitempty"`
CacheProvider cache.Config `yaml:"cache,omitempty"`
BackendScheduler backendscheduler.Config `yaml:"backend_scheduler,omitempty"`
BackenSchedulerClient backendscheduler_client.Config `yaml:"backend_scheduler_client,omitempty"`
BackendWorker backendworker.Config `yaml:"backend_worker,omitempty"`
LiveStore livestore.Config `yaml:"live_store,omitempty"`
}

func NewDefaultConfig() *Config {
@@ -146,8 +145,6 @@ func (c *Config) RegisterFlagsAndApplyDefaults(prefix string, f *flag.FlagSet) {
c.LiveStoreClient.GRPCClientConfig.GRPCCompression = defaultGRPCCompression
flagext.DefaultValues(&c.IngesterClient)
c.IngesterClient.GRPCClientConfig.GRPCCompression = defaultGRPCCompression
flagext.DefaultValues(&c.GeneratorClient)
c.GeneratorClient.GRPCClientConfig.GRPCCompression = defaultGRPCCompression
flagext.DefaultValues(&c.BackenSchedulerClient)
c.BackenSchedulerClient.GRPCClientConfig.GRPCCompression = defaultGRPCCompression
c.Overrides.RegisterFlagsAndApplyDefaults(f)
60 changes: 18 additions & 42 deletions cmd/tempo/app/modules.go
@@ -64,7 +64,6 @@
CacheProvider string = "cache-provider"

// rings
MetricsGeneratorRing string = "metrics-generator-ring"
LiveStoreRing string = "live-store-ring"
PartitionRing string = "partition-ring"
GeneratorRingWatcher string = "generator-ring-watcher"
@@ -84,8 +83,7 @@
SingleBinary string = "all"

// ring names
ringMetricsGenerator string = "metrics-generator"
ringLiveStore string = "live-store"
ringLiveStore string = "live-store"
)

func IsSingleBinary(target string) bool {
@@ -154,10 +152,6 @@
return s, nil
}

func (t *App) initGeneratorRing() (services.Service, error) {
return t.initReadRing(t.cfg.Generator.Ring.ToRingConfig(), ringMetricsGenerator, t.cfg.Generator.OverrideRingKey)
}

func (t *App) initLiveStoreRing() (services.Service, error) {
return t.initReadRing(t.cfg.LiveStore.Ring.ToRingConfig(), ringLiveStore, ringLiveStore)
}
@@ -248,16 +242,26 @@
}

func (t *App) initDistributor() (services.Service, error) {
singleBinary := IsSingleBinary(t.cfg.Target)

t.cfg.Distributor.KafkaConfig = t.cfg.Ingest.Kafka
t.cfg.Distributor.IngesterWritePathEnabled = false
t.cfg.Distributor.KafkaWritePathEnabled = t.cfg.Ingest.Enabled // TODO: Don't mix config params
t.cfg.Distributor.PushSpansToKafka = true
Review comment:

P1: Gate Kafka write-path enablement on actual ingest configuration.

initDistributor now forces `PushSpansToKafka = true` for every target, which makes `distributor.New()` always validate Kafka settings and fail when `ingest.kafka.topic` is unset. With the current defaults (`ingest.enabled: false`, empty topic), this turns the default `all` target into a startup error path (`ErrMissingKafkaTopic`) even for configs that previously booted without Kafka; the test change to `BackendScheduler` masks this regression rather than fixing it. Please only enable Kafka routing when the deployment/config is actually Kafka-backed (or provide a non-empty default topic).

Contributor (author):

Kafka is the only deployment model now unless we are in single-binary mode, so this is working as intended.


var pushSpansToLocalGenerator distributor.PushSpansFunc
if singleBinary {
pushSpansToLocalGenerator = func(ctx context.Context, req *tempopb.PushSpansRequest) (*tempopb.PushResponse, error) {
if t.generator == nil {
return nil, errors.New("metrics-generator not initialized")
}
return t.generator.PushSpans(ctx, req)

}
}
Comment on lines +250 to +258
Contributor:

Thinking about more kafkaless changes, it'd be nice to instead have a call on the distributor, `AddDirectPush()`, that appends these types of calls, because we're going to need it for the live-store as well. That way it also doesn't change the happy path.

I'm fine if you prefer it this way.

Contributor (author):

Good idea.

Contributor (author):

Do you mind if we do it in a different PR? This is wired with the middleware and the shim, and it involves some other changes.


// todo: make write-path client a module instead of passing the config everywhere
distributor, err := distributor.New(t.cfg.Distributor,
t.cfg.IngesterClient,
t.readRings[ringLiveStore],
t.cfg.GeneratorClient,
t.readRings[ringMetricsGenerator],
pushSpansToLocalGenerator,

t.partitionRing,
t.Overrides,
t.TracesConsumerMiddleware,
@@ -279,7 +283,7 @@
}

func (t *App) initGenerator() (services.Service, error) {
t.cfg.Generator.Ring.ListenPort = t.cfg.Server.GRPCListenPort
t.cfg.Generator.ConsumeFromKafka = !IsSingleBinary(t.cfg.Target)


t.cfg.Generator.Ingest = t.cfg.Ingest
t.cfg.Generator.Ingest.Kafka.ConsumerGroup = generator.ConsumerGroup
@@ -294,32 +298,14 @@
}
t.generator = genSvc

spanStatsHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.generator.SpanMetricsHandler))
t.Server.HTTPRouter().Handle(path.Join(api.PathPrefixGenerator, addHTTPAPIPrefix(&t.cfg, api.PathSpanMetrics)), spanStatsHandler)

queryRangeHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.generator.QueryRangeHandler))
t.Server.HTTPRouter().Handle(path.Join(api.PathPrefixGenerator, addHTTPAPIPrefix(&t.cfg, api.PathMetricsQueryRange)), queryRangeHandler)

if !IsSingleBinary(t.cfg.Target) {
tempopb.RegisterMetricsGeneratorServer(t.Server.GRPC(), t.generator) // todo: this can be removed before 3.0 but needs to exist as long as we have any deployments anywhere on the traditional arch
}

return t.generator, nil
}

func (t *App) initGeneratorNoLocalBlocks() (services.Service, error) {
Contributor:

I believe there is now no functional difference between the two, and they could be consolidated.

Contributor (author):

Yes, this is something I want to tackle in a different PR. We first need to allow initGenerator to use a different ring and different codecs for at least one release, since this target (initGeneratorNoLocalBlocks) is already in use.

reg := prometheus.DefaultRegisterer

t.cfg.Generator.Ingest = t.cfg.Ingest

// In this mode, the generator runs as a stateless queue consumer that reads from
// Kafka and remote writes to a Prometheus-compatible metrics store.
if !t.cfg.Ingest.Enabled {
return nil, errors.New("ingest storage must be enabled to run metrics generator in this mode")
}
// In this mode, the generator does not need to become available to serve
// queries, so we can skip setting up a gRPC server.
t.cfg.Generator.DisableGRPC = true
t.cfg.Generator.ConsumeFromKafka = true


var err error
t.generator, err = generator.New(&t.cfg.Generator, t.Overrides, reg, t.generatorRingWatcher, log.Logger)
@@ -356,10 +342,6 @@
}

func (t *App) initBlockBuilder() (services.Service, error) {
if !t.cfg.Ingest.Enabled {
return services.NewIdleService(nil, nil), nil
}

t.cfg.BlockBuilder.IngestStorageConfig = t.cfg.Ingest
t.cfg.BlockBuilder.IngestStorageConfig.Kafka.ConsumerGroup = blockbuilder.ConsumerGroup
t.cfg.BlockBuilder.GlobalBlockConfig = t.cfg.StorageConfig.Trace.Block
@@ -675,10 +657,6 @@
}

func (t *App) initLiveStore() (services.Service, error) {
if !t.cfg.Ingest.Enabled {
return services.NewIdleService(nil, nil), nil
}

// In SingleBinary mode don't try to discover partition from host name.
// Always use partition 0. This is for small installs or local/debugging setups.
singlePartition := IsSingleBinary(t.cfg.Target)
@@ -721,7 +699,6 @@
mm.RegisterModule(OverridesAPI, t.initOverridesAPI)
mm.RegisterModule(UsageReport, t.initUsageReport)
mm.RegisterModule(CacheProvider, t.initCacheProvider, modules.UserInvisibleModule)
mm.RegisterModule(MetricsGeneratorRing, t.initGeneratorRing, modules.UserInvisibleModule)
mm.RegisterModule(GeneratorRingWatcher, t.initGeneratorRingWatcher, modules.UserInvisibleModule)
mm.RegisterModule(LiveStoreRing, t.initLiveStoreRing, modules.UserInvisibleModule)
mm.RegisterModule(PartitionRing, t.initPartitionRing, modules.UserInvisibleModule)
@@ -749,7 +726,6 @@
OverridesAPI: {Server, Overrides},
MemberlistKV: {Server},
UsageReport: {MemberlistKV},
MetricsGeneratorRing: {Server, MemberlistKV},
LiveStoreRing: {Server, MemberlistKV},
PartitionRing: {MemberlistKV, Server, LiveStoreRing},
GeneratorRingWatcher: {MemberlistKV},
@@ -758,7 +734,7 @@

// individual targets
QueryFrontend: {Common, Store, OverridesAPI},
Distributor: {Common, LiveStoreRing, MetricsGeneratorRing, PartitionRing},
Distributor: {Common, LiveStoreRing, PartitionRing},
MetricsGenerator: {Common, MemberlistKV, PartitionRing},
MetricsGeneratorNoLocalBlocks: {Common, GeneratorRingWatcher},
Querier: {Common, Store, LiveStoreRing, PartitionRing},
5 changes: 1 addition & 4 deletions docs/sources/tempo/configuration/_index.md
@@ -315,7 +315,7 @@ Benchmark testing suggested that without compression, queriers and distributors
However, you may notice an increase in ingester data and network traffic especially for larger clusters.
This increased data can impact billing for Grafana Cloud.

You can configure the gRPC compression in the `querier`, `ingester`, and `metrics_generator` clients of the distributor.
You can configure the gRPC compression in the `ingester_client` and `querier.frontend_worker` gRPC clients.

To disable compression, remove `snappy` from the `grpc_compression` lines.

@@ -325,9 +325,6 @@ To re-enable the compression, use `snappy` with the following settings:
ingester_client:
grpc_client_config:
grpc_compression: "snappy"
metrics_generator_client:
grpc_client_config:
grpc_compression: "snappy"
querier:
frontend_worker:
grpc_client_config:
34 changes: 0 additions & 34 deletions docs/sources/tempo/configuration/manifest.md
@@ -272,39 +272,6 @@ ingester_client:
connect_backoff_max_delay: 5s
cluster_validation:
label: ""
metrics_generator_client:
pool_config:
checkinterval: 15s
healthcheckenabled: true
healthchecktimeout: 1s
maxconcurrenthealthchecks: 0
remote_timeout: 5s
grpc_client_config:
max_recv_msg_size: 104857600
max_send_msg_size: 104857600
grpc_compression: snappy
rate_limit: 0
rate_limit_burst: 0
backoff_on_ratelimits: false
backoff_config:
min_period: 100ms
max_period: 10s
max_retries: 10
initial_stream_window_size: 63KiB1023B
initial_connection_window_size: 63KiB1023B
tls_enabled: false
tls_cert_path: ""
tls_key_path: ""
tls_ca_path: ""
tls_server_name: ""
tls_insecure_skip_verify: false
tls_cipher_suites: ""
tls_min_version: ""
connect_timeout: 5s
connect_backoff_base_delay: 1s
connect_backoff_max_delay: 5s
cluster_validation:
label: ""
live_store_client:
pool_config:
checkinterval: 15s
@@ -650,7 +617,6 @@ metrics_generator:
remote_write_flush_deadline: 1m0s
remote_write_add_org_id_header: true
metrics_ingestion_time_range_slack: 30s
query_timeout: 30s
override_ring_key: metrics-generator
codec: push-bytes
disable_grpc: false
10 changes: 1 addition & 9 deletions docs/sources/tempo/configuration/network/tls.md
@@ -60,8 +60,7 @@ grpc_client_config:
The configuration block needs to be set at the following configuration locations.

- `ingester_client.grpc_client_config`
- `metrics_generator_client.grpc_client_config`
- `querier.query-frontend.grpc_client_config`
- `querier.frontend_worker.grpc_client_config`

Additionally, `memberlist` must also be configured, but the client configuration is nested directly under `memberlist` as follows. The same configuration options are available as above.

@@ -209,13 +208,6 @@ tempo:
- parquet-footer
- bloom
- frontend-search
metrics_generator_client:
grpc_client_config:
tls_ca_path: /tls/ca.crt
tls_cert_path: /tls/tls.crt
tls_enabled: true
tls_key_path: /tls/tls.key
tls_server_name: tempo-distributed.trace.svc.cluster.local
querier:
frontend_worker:
grpc_client_config:
18 changes: 18 additions & 0 deletions integration/metrics-generator/metrics_generator_test.go
@@ -26,6 +26,24 @@ const (
configMetricsGeneratorMessagingSystem = "config-messaging-system.yaml"
)

func TestMetricsGeneratorSingleBinaryPushesInProcess(t *testing.T) {
util.RunIntegrationTests(t, util.TestHarnessConfig{
ConfigOverlay: configMetricsGenerator,
DeploymentMode: util.DeploymentModeSingleBinary,
Components: util.ComponentsMetricsGeneration,
}, func(h *util.TempoHarness) {
h.WaitTracesWritable(t)

require.NoError(t, h.WriteJaegerBatch(util.MakeThriftBatch(), ""))

tempo := h.Services[util.ServiceMetricsGenerator]
// In single-binary mode the generator receives spans in-process and should
// not consume from Kafka.
require.NoError(t, tempo.WaitSumMetrics(e2e.Equals(float64(0)), "tempo_metrics_generator_enqueue_time_seconds_total"))
require.NoError(t, tempo.WaitSumMetrics(e2e.GreaterOrEqual(1), "tempo_metrics_generator_spans_received_total"))
})
}

func TestMetricsGeneratorRemoteWrite(t *testing.T) {
util.RunIntegrationTests(t, util.TestHarnessConfig{
ConfigOverlay: configMetricsGenerator,
14 changes: 9 additions & 5 deletions modules/distributor/config.go
@@ -42,10 +42,14 @@ type Config struct {
Forwarders forwarder.ConfigList `yaml:"forwarders"`
Usage usage.Config `yaml:"usage,omitempty"`

// Migration to Kafka write path
IngesterWritePathEnabled bool `yaml:"ingester_write_path_enabled"`
KafkaWritePathEnabled bool `yaml:"kafka_write_path_enabled"`
KafkaConfig ingest.KafkaConfig `yaml:"kafka_config"`
// Deprecated: this field will be removed in a future release. Write path routing is set by deployment model.
IngesterWritePathEnabled bool `yaml:"ingester_write_path_enabled"`
// Deprecated: this field will be removed in a future release. Write path routing is set by deployment model.
KafkaWritePathEnabled bool `yaml:"kafka_write_path_enabled"`
KafkaConfig ingest.KafkaConfig `yaml:"kafka_config"`

// Internal routing toggle set by app wiring (not user-configurable).
PushSpansToKafka bool `yaml:"-"`
Contributor:

Isn't this always true? Why have this param?

Contributor (author):

This is a placeholder. When we make the kafkaless mode, this will be set to false, instructing the distributor not to push to Kafka.


// disables write extension with inactive ingesters. Use this along with ingester.lifecycler.unregister_on_shutdown = true
// note that setting these two config values reduces tolerance to failures on rollout b/c there is always one guaranteed to be failing replica
@@ -104,7 +108,7 @@ func (cfg *Config) RegisterFlagsAndApplyDefaults(prefix string, f *flag.FlagSet) {
}

func (cfg *Config) Validate() error {
if cfg.KafkaWritePathEnabled {
if cfg.PushSpansToKafka {
if err := cfg.KafkaConfig.Validate(); err != nil {
return err
}