issues on new install on kubernetes v1.16.2 #1387
Comments
That MeteringConfig custom resource is a bit more involved than what I would expect, but this seems fine to me at a glance. We override a lot of those fields you've configured (like ...)
running on bare metal

for peace of mind, I tried deploying on GKE; if that's the case, then potentially I need to swap out the CNI on our end.
Would NFS storage for hive-metastore-db-data be a problem?
In theory, it should be fine, but OpenShift as a whole has avoided recommending that as storage for any of the components and we've followed suit on that. I really haven't done any load testing with NFS as a storage backend, but no problem has arisen in our e2e suite or local tests that would indicate it cannot be used for metering.
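For what it's worth, here is a rough sketch of how the storage-related parts of a MeteringConfig could point the hive-metastore-db-data volume (and report storage) at an NFS-backed StorageClass; the class name nfs-client is a placeholder, and the exact field names should be double-checked against the project's example manifests:

```yaml
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: operator-metering
spec:
  storage:
    type: hive
    hive:
      # store report data on a shared ReadWriteMany PVC
      type: sharedPVC
      sharedPVC:
        createPVC: true
        storageClass: nfs-client   # placeholder NFS-backed StorageClass
        size: 5Gi
  hive:
    spec:
      metastore:
        storage:
          # backs the hive-metastore-db-data PVC discussed above
          create: true
          class: nfs-client        # placeholder NFS-backed StorageClass
          size: 5Gi
```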
Cool. I tried swapping out Derby for Postgres but am unsure what version to use; I tried the chart from Bitnami using tags taken from hive-metastore.
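In case it's useful, a hedged sketch of pointing the Hive metastore at an external PostgreSQL database instead of the embedded Derby; the hostname, database name, and credentials are placeholders, and the exact keys (plus a driver/server version combination that actually works) would need to be verified:

```yaml
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: operator-metering
spec:
  hive:
    spec:
      config:
        db:
          # external PostgreSQL in place of the default embedded Derby;
          # every value below is a placeholder
          url: "jdbc:postgresql://postgresql.metering.svc:5432/hive_metastore"
          driver: "org.postgresql.Driver"
          username: "hive"
          password: "REPLACEME"
```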
@jecho yeah, unfortunately, that's a known error on our side. It seems like the JDBC driver we use in the Hive image is out of date, and I don't know off-hand what version of PostgreSQL will work out of the box. It looks like it's hanging on the create table call we're making internally. Can you try deleting the hive-server-0 Pod and see if that changes anything?
@timflannagan1 after kicking the hive-server, hive-metastore seems to try to create the next two and then settles?

hive-metastore:

reporting-operator pod:
Sorry for the slow response - do you still have the hive server logs? We have someone on the team looking into the Postgres bug now.
I do not. I can redeploy and grab the log and stack trace for this Postgres bug. I did end up trying to deploy with MySQL at one point, however. The current container images:

Let me know if I should provide anything else, such as a kubeadm dump or whatnot? And thanks, guys!
@jecho I would stay away from the 4.6/4.7 tags (as we're stuck on OpenShift's release cycle, at least for now, and those tags are mirrored until the 4.6 release has reached GA), since we recently had to migrate everything to RHEL8 base images, which is causing some issues, like, as you pointed out, the Postgres JDBC driver not currently being configured in the classpath. I've been pretty bogged down this week wrapping up other work, so I still need to try and spin up a non-OpenShift cluster installation and try to reproduce this error. I imagine it's permissions-related, but I haven't dug into that outstanding Presto issue either.
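Purely as an illustration (the repository, tag, and field layout here are assumptions and should be checked against the project's example manifests), pinning the reporting-operator image to an older GA tag rather than the mirrored 4.6/4.7 tags might look something like:

```yaml
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: operator-metering
spec:
  reporting-operator:
    spec:
      image:
        # placeholder repository and tag; pin to a GA release instead of 4.6/4.7
        repository: quay.io/openshift/origin-metering-reporting-operator
        tag: "4.5"
```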
I'm not sure if this is directly related to #1122, but I am getting quite a few of these errors post-install in the reporting-operator:
error="failed to store Prometheus metrics into table hive.metering.datasource_metering_persistentvolumeclaim_request_bytes for the range 2020-08-28 10:32:00 +0000 UTC to 2020-08-28 10:37:00 +0000 UTC: failed to store metrics into presto: presto: query failed (200 OK): \"io.prestosql.spi.PrestoException: Failed checking path: {redacted}
So basically the ReportDataSource will always be empty. I've used both the s3 and the sharedPVC storage types and it yields the same results. Curious if maybe it's my configuration, unless it's something with the current version of Presto.
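For context, the sharedPVC variant is sketched earlier in the thread; the s3 storage type mentioned here looks roughly like the following (bucket path, region, and secret name are placeholders, and the exact fields should be checked against the project's documentation):

```yaml
spec:
  storage:
    type: hive
    hive:
      type: s3
      s3:
        # placeholder bucket path, region, and credentials Secret
        bucket: my-metering-bucket/metering-data
        region: us-east-1
        secretName: my-aws-secret
        createBucket: false
```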
I'm currently using 4.7. I was encountering the Python issue in Presto on startup with previous releases.

This is my configuration: