I'm using Analytics-CSharp v2.5.0 in an ASP.NET web application. The dependency injection setup is:

```csharp
services.AddScoped(sp =>
{
    var persistentDataPath = Path.GetTempPath();
    var logger = sp.GetRequiredService<IDatadogLogger>();
    logger.LogInformation($"Segment Analytics is configured with persistentDataPath: {persistentDataPath}");

    var analytics = new Analytics(new Segment.Analytics.Configuration(
        settings.BusinessEvents.WriteKey,
        apiHost: settings.BusinessEvents.ApiHost,
        flushAt: settings.BusinessEvents.FlushAtEventCount,
        flushInterval: settings.BusinessEvents.FlushIntervalSec,
        storageProvider: new DefaultStorageProvider(persistentDataPath)));

    analytics.Add(new UserIdEnrichmentPlugin());
    Analytics.Logger = sp.GetRequiredService<SegmentLogger>();
    return analytics;
});
```
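One experiment I'm considering to narrow this down: since `AddScoped` creates a new `Analytics` per request, and every instance points at the same `persistentDataPath` and write key, a singleton registration would rule out multiple instances contending on the same event file. A sketch (same configuration values as above, not tested):

```csharp
// Sketch only: a single Analytics instance for the whole process, so only
// one writer ever touches the files under persistentDataPath.
services.AddSingleton(sp =>
{
    var persistentDataPath = Path.GetTempPath();
    var analytics = new Analytics(new Segment.Analytics.Configuration(
        settings.BusinessEvents.WriteKey,
        apiHost: settings.BusinessEvents.ApiHost,
        flushAt: settings.BusinessEvents.FlushAtEventCount,
        flushInterval: settings.BusinessEvents.FlushIntervalSec,
        storageProvider: new DefaultStorageProvider(persistentDataPath)));
    analytics.Add(new UserIdEnrichmentPlugin());
    return analytics;
});
```

If the IOExceptions disappear under singleton registration, that would point at per-request instances racing on the same file rather than an environment issue.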
In this scenario, FlushAt = 20 and FlushInterval = 30 (seconds).
The `SegmentLogger` class implements `ISegmentLogger` and forwards to Datadog. The `UserIdEnrichmentPlugin` mutates the RawEvents by lifting a `userId` from the event properties to the root of the event. (Neither should be relevant to the IOException issue; I'm describing them here only for completeness.)
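For reference, the plugin is shaped roughly like this (a sketch approximating the Analytics-CSharp enrichment plugin API, not the exact production code):

```csharp
// Sketch: lift a "userId" property up to the event root.
// Base class and member names approximate the Analytics-CSharp plugin API.
public class UserIdEnrichmentPlugin : EventPlugin
{
    public override PluginType Type => PluginType.Enrichment;

    public override TrackEvent Track(TrackEvent trackEvent)
    {
        // If the caller put userId into Properties, promote it to the root.
        if (trackEvent.Properties.ContainsKey("userId"))
        {
            trackEvent.UserId = trackEvent.Properties["userId"]?.ToString();
        }
        return trackEvent;
    }
}
```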
I have verified that file system permissions are correct, and the majority of the events we emit via Analytics.Track() are received. However, roughly 16% of events are lost (never received), each corresponding to an error log similar to:
```
System.IO.IOException: The process cannot access the file '/tmp/segment.data/VZiEGasTFTYm557jKVM0Z8d38XxRuns8/events/VZiEGasTFTYm557jKVM0Z8d38XxRuns8-0' because it is being used by another process.
   at Microsoft.Win32.SafeHandles.SafeFileHandle.Init(String path, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize, Int64& fileLength, UnixFileMode& filePermissions)
   at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String fullPath, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize, UnixFileMode openPermissions, Int64& fileLength, UnixFileMode& filePermissions, Boolean failForSymlink, Boolean& wasSymlink, Func`4 createOpenException)
   at System.IO.Strategies.OSFileStreamStrategy..ctor(String path, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize, Nullable`1 unixCreateMode)
   at System.IO.FileInfo.Open(FileMode mode, FileAccess access)
   at Segment.Analytics.Utilities.FileEventStream.OpenOrCreate(String file, Boolean& newFile)
   at Segment.Analytics.Utilities.Storage.<>c__DisplayClass32_0.<<StoreEvent>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Segment.Analytics.Utilities.Storage.WithLock(Func`1 block)
   at Segment.Analytics.Utilities.Storage.StoreEvent(String event)
   at Segment.Analytics.Utilities.Storage.Write(StorageConstants key, String value)
   at Segment.Analytics.Utilities.EventPipeline.<Write>b__23_0()
```
This ASP.NET application should be the only process accessing the /tmp/segment.* directories. (FWIW, this is a Docker image running on Kubernetes/AKS, using the base image mcr.microsoft.com/dotnet/aspnet:8.0-jammy.)
We're likely going to switch to InMemoryStorageProvider with flushAt: 1 as a short-term "fix" to avoid losing events, but I'd like to understand the root cause of the System.IO.IOException. Do you have any suggestions for determining whether this is a defect in the storage provider versus an environment/host issue?
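For reference, the short-term workaround would look roughly like this (a sketch; I'm assuming InMemoryStorageProvider can be constructed without arguments):

```csharp
// Sketch of the short-term "fix": buffer events in memory only and flush
// each one immediately, so nothing is ever written to disk.
var analytics = new Analytics(new Segment.Analytics.Configuration(
    settings.BusinessEvents.WriteKey,
    apiHost: settings.BusinessEvents.ApiHost,
    flushAt: 1, // flush after every event instead of batching
    storageProvider: new InMemoryStorageProvider()));
```

The obvious trade-off is that any events still in memory are lost if the pod is killed, which is why I'd prefer to understand the file-based provider's failure mode instead.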
Thanks!