
🤗 [Question]: memory usage is increasing exponentially #3291

Closed
3 tasks done

ogulcanarbc opened this issue Jan 24, 2025 · 10 comments

Comments

@ogulcanarbc

Question Description

Hello. Memory consumption increases over time, and after about two days it reaches the limit values and the pod gets OOMKilled. The pprof results are shown in the image below. What do you think about this issue? I can share more detailed information if you need it. Thanks.

[Image: pprof results]

my middlewares:

// HttpGlobalMiddleware returns a slice of fiber.Handler containing global middlewares for the given core.App.
// The middlewares include CorrelationID, Logging, Recovery, compress, pprof, requestId, and Opa.
// If a New Relic application is found in the container, the NewRelic middleware will also be added.
func (r AppRegistry) HttpGlobalMiddleware(app core.App) []fiber.Handler {
	middlewares := []fiber.Handler{
		middleware.CorrelationID(),
		middleware.Logging(app.Config().GetBoolean("logging.debugMode")),
		middleware.Recovery(),
		compress.New(),
		pprof.New(),
		requestid.New(),
		cors.New(cors.Config{
			AllowHeaders: "*",
			AllowOrigins: "*",
			AllowMethods: "GET,POST,HEAD,PUT,DELETE,PATCH,OPTIONS",
		}),
		middleware.Opa(&middleware.Options{
			QueryName:              app.Config().GetString("opa.queryName"),
			PolicyPath:             app.Config().GetString("opa.policyPath"),
			ExcludedEndpoints:      app.Config().GetStringArray("opa.excludedEndpoints"),
			ExcludedActiveProfiles: app.Config().GetStringArray("opa.excludedActiveProfiles"),
			ActiveProfile:          app.Config().GetString("app.env"),
			EncodedJwksSecret:      app.Secret().GetString("jwksSecret"),
		}),
	}

	if newrelicApp := app.Container().MustGet(NewrelicKey).(*newrelic.Application); newrelicApp != nil {
		middlewares = append(middlewares, middleware.NewRelic(newrelicApp))
	}

	return middlewares
}

Code Snippet (optional)

package main

import (
	"log"

	"github.com/gofiber/fiber/v3"
)

func main() {
	app := fiber.New()

	// An example to describe the question

	log.Fatal(app.Listen(":3000"))
}

Checklist:

  • I agree to follow Fiber's Code of Conduct.
  • I have checked for existing issues that describe my questions prior to opening this one.
  • I understand that improperly formatted questions may be closed without explanation.

welcome bot commented Jan 24, 2025

Thanks for opening your first issue here! 🎉 Be sure to follow the issue template! If you need help or want to chat with us, join us on Discord https://gofiber.io/discord

@ReneWerner87
Member

@ogulcanarbc thanks for the report.
It would be important to find out which variable is increasing like this and causing it.

Can you please continue your research and share the approximate code? Then we can tackle the problem.

If anyone else can reproduce this, please post it here too.
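
Not part of the thread, but a hedged sketch of one way to do that research: snapshot heap profiles periodically and diff two snapshots with go tool pprof -base to see which allocation site keeps growing. The helper name, file names, and interval below are made up for illustration.

package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

// snapshotHeap writes a heap profile to heap-<n>.pb.gz every interval.
// Compare two snapshots later with:
//   go tool pprof -base heap-0.pb.gz heap-12.pb.gz
func snapshotHeap(interval time.Duration) {
	for i := 0; ; i++ {
		time.Sleep(interval)
		runtime.GC() // refresh allocation statistics before profiling
		f, err := os.Create(fmt.Sprintf("heap-%d.pb.gz", i))
		if err != nil {
			continue
		}
		_ = pprof.WriteHeapProfile(f)
		f.Close()
	}
}

func main() {
	go snapshotHeap(10 * time.Minute)
	select {} // stand-in for the real application; block forever
}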

@ReneWerner87
Member

[Image]

[Image]

What do you use in that one middleware?

@ReneWerner87
Member

The code would help us figure out where the error lies.

@ogulcanarbc
Author

ogulcanarbc commented Jan 24, 2025

Showing top 10 nodes out of 99
      flat  flat%   sum%        cum   cum%
   88.44MB 54.69% 54.69%    88.44MB 54.69%  go.opentelemetry.io/otel/internal/global.(*tracerProvider).Tracer
   53.51MB 33.09% 87.78%    53.51MB 33.09%  github.com/imroc/req/v3.parseRequestURL
    6.17MB  3.82% 91.60%     7.24MB  4.48%  compress/flate.NewWriter
    2.86MB  1.77% 93.37%     2.86MB  1.77%  github.com/klauspost/compress/flate.newFastEnc (inline)
    1.27MB  0.79% 94.15%     1.27MB  0.79%  github.com/valyala/fasthttp/stackless.NewFunc
    1.07MB  0.66% 94.82%     1.07MB  0.66%  compress/flate.(*compressor).initDeflate (inline)
    1.01MB  0.62% 95.44%     1.01MB  0.62%  bufio.NewReaderSize
       1MB  0.62% 96.06%        1MB  0.62%  regexp/syntax.(*compiler).inst
    0.98MB   0.6% 96.66%     0.98MB   0.6%  github.com/couchbase/gocbcore/v10.newStdCleaner
    0.64MB   0.4% 97.06%     4.03MB  2.49%  github.com/klauspost/compress/flate.NewWriter

It might be more understandable if I share it like this: OpenTelemetry is implemented but not enabled.

func (h Handler) Handler(ctx *fiber.Ctx) error {
	start := time.Now()
	defer func() {
		duration := time.Since(start)
		h.metricCollector.AddRestOperationDuration("agreement_printcargoprovideragreement", duration.Milliseconds())
	}()

	....
}

func (p *PrometheusMetricCollector) Register(opts ...func(*MetricOptions)) {
	buckets := []float64{...,...}

	opt := &MetricOptions{}

	for _, o := range opts {
		o(opt)
	}

	registry := prometheus.NewRegistry()
	registry.MustRegister(
		collectors.NewGoCollector(collectors.WithGoCollectorRuntimeMetrics(collectors.MetricsAll)),
		collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
	)

	p.registerRestOpsDurationHistogram(registry, opt.AppName, opt.RestDurationCollectorEnabled, buckets)
}

func (p *PrometheusMetricCollector) registerRestOpsDurationHistogram(registry *prometheus.Registry, appName string, enabled bool, buckets []float64) {
	if enabled {
		restOpsDurationHistogram := prometheus.NewHistogramVec(prometheus.HistogramOpts{
			Name:        RestOpsDuration.GetName(),
			Help:        "",
			Buckets:     buckets,
			ConstLabels: prometheus.Labels{"app": appName},
		}, []string{"operation"})

		registeredCollectors[RestOpsDuration.GetName()] = restOpsDurationHistogram
		registry.MustRegister(restOpsDurationHistogram)
	}
}
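
A hedged side note on the largest entry in that profile: go.opentelemetry.io/otel/internal/global.(*tracerProvider).Tracer is the global delegating tracer provider, and it caches one tracer per unique instrumentation name so it can hand them over to a real SDK later. If a tracer is requested with a dynamic, per-request name, that cache appears to grow without bound even while OpenTelemetry is implemented but not enabled. A minimal sketch of that pattern and the usual fix; the names here are made up for illustration.

package main

import (
	"fmt"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/trace"
)

// Leaky pattern: a new instrumentation name per call makes the global
// provider cache a new tracer entry every time, and the cache is never pruned.
func leakyTracer(requestID string) trace.Tracer {
	return otel.Tracer(fmt.Sprintf("handler-%s", requestID)) // hypothetical dynamic name
}

// Usual fix: create the tracer once with a fixed instrumentation name
// and reuse it for every request.
var tracer = otel.Tracer("github.com/example/app/handlers") // hypothetical fixed name

func main() {
	_ = leakyTracer("abc123") // each unique ID adds an entry to the global cache
	_ = tracer                // constant size regardless of traffic
}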

@ReneWerner87
Member

Have you removed these middlewares and tested whether the problem still exists?
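
One hedged way to do that incrementally, without touching every call site, is to gate each middleware behind a config flag and switch them off one at a time in a test environment. A sketch against the HttpGlobalMiddleware shown above; the middlewares.*.enabled keys are hypothetical.

// Sketch: per-middleware kill switches so the stack can be bisected from
// configuration alone. Only a few middlewares are shown; the rest would be
// gated the same way.
func (r AppRegistry) HttpGlobalMiddleware(app core.App) []fiber.Handler {
	cfg := app.Config()
	middlewares := []fiber.Handler{}

	if cfg.GetBoolean("middlewares.compress.enabled") { // hypothetical config key
		middlewares = append(middlewares, compress.New())
	}
	if cfg.GetBoolean("middlewares.pprof.enabled") {
		middlewares = append(middlewares, pprof.New())
	}
	if cfg.GetBoolean("middlewares.cors.enabled") {
		middlewares = append(middlewares, cors.New())
	}
	// ...remaining middlewares (Opa, NewRelic, etc.) gated the same way...

	return middlewares
}

Running the service for a while with one middleware disabled at a time should show which middleware the memory growth follows.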

@ogulcanarbc
Author

Not yet. I have that option, but because these middlewares are used in many places I wanted to brainstorm other options first.

@ReneWerner87
Member

ReneWerner87 commented Jan 24, 2025

Right, but to find out in which middleware and at which place in the code the cause lies, we would have to do this.

@ogulcanarbc
Author

Thanks for your support. I will continue this issue over in opentelemetry-go.

@ReneWerner87
Member

ReneWerner87 commented Jan 24, 2025

> Thanks for your support. I will continue this issue over in opentelemetry-go.

Right, best in the source code repository where the leak was created. Thank you for the report.
