78 changes: 75 additions & 3 deletions payjoin-cli/src/app/v2/ohttp.rs
@@ -1,9 +1,16 @@
use std::fs;
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
use std::time::{Duration, SystemTime};

use anyhow::{anyhow, Result};
use serde::{Deserialize, Serialize};

use super::Config;

// 6 months
const CACHE_DURATION: Duration = Duration::from_secs(6 * 30 * 24 * 60 * 60);
Comment on lines +11 to +12
Contributor

this is a pretty long duration to hard code. it seems more appropriate to allow OHTTP gateways to control this by using the cache control mechanisms defined in HTTP, so that operators can determine their own policies?
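As the reviewer suggests, one way to let gateway operators control the lifetime is to honor the HTTP caching headers on the keys response rather than hardcoding a duration. A minimal sketch of parsing a `max-age` directive out of a `Cache-Control` header value (the `parse_max_age` helper is hypothetical, not part of payjoin-cli):

```rust
use std::time::Duration;

/// Parse the `max-age` directive out of a Cache-Control header value,
/// e.g. "public, max-age=86400". Returns None when no parseable
/// max-age directive is present.
fn parse_max_age(cache_control: &str) -> Option<Duration> {
    cache_control
        .split(',')
        .map(str::trim)
        .find_map(|directive| directive.strip_prefix("max-age="))
        .and_then(|secs| secs.parse::<u64>().ok())
        .map(Duration::from_secs)
}

fn main() {
    // A gateway advertising a one-day lifetime:
    assert_eq!(parse_max_age("public, max-age=86400"), Some(Duration::from_secs(86400)));
    // No caching directive at all:
    assert_eq!(parse_max_age("no-store"), None);
}
```

The parsed duration could then replace `CACHE_DURATION` as the expiry horizon for that relay's cached keys, falling back to a conservative default when the header is absent.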


#[derive(Debug, Clone)]
pub struct RelayManager {
selected_relay: Option<url::Url>,
@@ -39,7 +46,6 @@ pub(crate) async fn unwrap_ohttp_keys_or_else_fetch(
} else {
println!("Bootstrapping private network transport over Oblivious HTTP");
let fetched_keys = fetch_ohttp_keys(config, directory, relay_manager).await?;

Ok(fetched_keys)
}
}
@@ -75,6 +81,17 @@ async fn fetch_ohttp_keys(
.expect("Lock should not be poisoned")
.set_selected_relay(selected_relay.clone());

// try cache for this selected relay first
if let Some(cached) = read_cached_ohttp_keys(&selected_relay) {
if !is_expired(&cached) && cached.relay_url == selected_relay {
tracing::info!("Using cached keys for relay: {selected_relay}");
Collaborator

Why would read_cached_ohttp_keys return expired keys or keys for a different relay? Perhaps read_cached_ohttp_keys should return ValidatedOhttpKeys?

Collaborator Author

There is a possibility of the cached keys being expired by the time we try to use them, so it checks that the cached keys haven't exceeded that duration before using them.

Contributor

If the relay_url were the key, the comparison would not be necessary; instead the key is just the host part of the URL. I think that only causes collisions when different ports are used, see other comment.

return Ok(ValidatedOhttpKeys {
Contributor

returning short-circuits here, which removes the need to read from the cache a second time for writeback

ohttp_keys: cached.keys,
relay_url: cached.relay_url,
});
}
}

let ohttp_keys = {
#[cfg(feature = "_manual-tls")]
{
@@ -99,8 +116,17 @@
};

match ohttp_keys {
Ok(keys) =>
return Ok(ValidatedOhttpKeys { ohttp_keys: keys, relay_url: selected_relay }),
Ok(keys) => {
// Cache the keys if they are not already cached for this relay
if read_cached_ohttp_keys(&selected_relay).is_none() {
Contributor

this reads from the cache a second time unnecessarily

if let Err(e) = cache_ohttp_keys(&keys, &selected_relay) {
tracing::debug!(
"Failed to cache OHTTP keys for relay {selected_relay}: {e:?}"
);
}
}
return Ok(ValidatedOhttpKeys { ohttp_keys: keys, relay_url: selected_relay });
}
Err(payjoin::io::Error::UnexpectedStatusCode(e)) => {
return Err(payjoin::io::Error::UnexpectedStatusCode(e).into());
}
@@ -114,3 +140,49 @@ async fn fetch_ohttp_keys(
}
}
}
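Following the short-circuit and double-read comments above, the flow could read the cache once and reuse that single result for both the validity check and the writeback decision. A minimal sketch with hypothetical stub types (`Cached` and `fetch_keys` are illustrative, not the payjoin-cli API):

```rust
struct Cached {
    keys: String,
    expired: bool,
}

/// Read-once cache flow: the caller passes in the single cache read,
/// and this function decides between short-circuit and fetch+writeback.
fn fetch_keys(cache: Option<Cached>, fetch: impl Fn() -> String) -> String {
    match cache {
        // Fresh hit: short-circuit, no network fetch and no second cache read.
        Some(c) if !c.expired => c.keys,
        // Miss or stale entry: fetch, then use the already-read cache state
        // to decide whether a writeback is needed.
        cached => {
            let keys = fetch();
            if cached.map_or(true, |c| c.expired) {
                // writeback would go here (cache_ohttp_keys in the real code)
            }
            keys
        }
    }
}

fn main() {
    // Stale entry: the fetch closure runs.
    let got = fetch_keys(Some(Cached { keys: "old".into(), expired: true }), || "fresh".into());
    assert_eq!(got, "fresh");
    // Fresh entry: short-circuit, the fetch closure never runs.
    let got = fetch_keys(Some(Cached { keys: "cached".into(), expired: false }), || unreachable!());
    assert_eq!(got, "cached");
}
```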

#[derive(Serialize, Deserialize, Debug)]
struct CachedOhttpKeys {
keys: payjoin::OhttpKeys,
relay_url: url::Url,
fetched_at: u64,
}

fn get_cache_file(relay_url: &url::Url) -> PathBuf {
Collaborator

Why are cached keys not being persisted in the database?

Collaborator Author

They are persisted to the file system and then get replaced with new keys if the cached one expires. I think it's more straightforward than adding it to the db. On the file system, keys are stored per-relay; each relay can have one key cache at a time.

Contributor

We already use SQLite, so why not have a table with the relay URL as the key and the pubkey as the value? No need for serde, key mangling, or an additional piece of mutable storage to keep track of.

Contributor

Another note: keys correspond to directories, not relays. Might just be a mistype but it's a concrete distinction I felt necessary to repeat for certainty.

Contributor

> Another note: keys correspond to directories, not relays. Might just be a mistype but it's a concrete distinction I felt necessary to repeat for certainty.

this is very important

dirs::cache_dir()
.unwrap()
.join("payjoin-cli")
.join(relay_url.host_str().unwrap())
Contributor

Why not store the relay_url in full so two gateways on the same host don't collide?

The only collision that can happen in practice, given that we set things up to be on the top level of a single vhost, is the same host with different ports.

.join("ohttp-keys.json")
}
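To avoid the same-host collision the reviewer describes, the cache path could incorporate the port as well as the host. A minimal sketch (`relay_cache_key` is a hypothetical helper, not part of payjoin-cli):

```rust
/// Derive a per-relay cache directory name from both the host and the
/// port, so two gateways on the same host but different ports do not
/// share a cache entry.
fn relay_cache_key(host: &str, port: Option<u16>) -> String {
    match port {
        Some(p) => format!("{host}_{p}"),
        None => host.to_string(),
    }
}

fn main() {
    // Explicit port is folded into the key:
    assert_eq!(relay_cache_key("relay.example.org", Some(8443)), "relay.example.org_8443");
    // Default-port URLs keep the bare host:
    assert_eq!(relay_cache_key("relay.example.org", None), "relay.example.org");
}
```

With `url::Url`, `relay_url.host_str()` and `relay_url.port()` would supply the two arguments in place of the current host-only `.join(relay_url.host_str().unwrap())`.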

fn read_cached_ohttp_keys(relay_url: &url::Url) -> Option<CachedOhttpKeys> {
let cache_file = get_cache_file(relay_url);
if !cache_file.exists() {
return None;
}
let data = fs::read_to_string(cache_file).ok()?;
serde_json::from_str(&data).ok()
}

fn cache_ohttp_keys(ohttp_keys: &payjoin::OhttpKeys, relay_url: &url::Url) -> Result<()> {
let cached = CachedOhttpKeys {
keys: ohttp_keys.clone(),
relay_url: relay_url.clone(),
fetched_at: SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap().as_secs(),
};

let serialized = serde_json::to_string(&cached)?;
let path = get_cache_file(relay_url);
fs::create_dir_all(path.parent().unwrap())?;
fs::write(path, serialized)?;
Ok(())
}

fn is_expired(cached_keys: &CachedOhttpKeys) -> bool {
let now = SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap_or(Duration::ZERO)
.as_secs();
now.saturating_sub(cached_keys.fetched_at) > CACHE_DURATION.as_secs()
}
11 changes: 11 additions & 0 deletions payjoin-cli/tests/e2e.rs
@@ -64,6 +64,15 @@ mod e2e {
res
}

fn clear_payjoin_cache() -> std::io::Result<()> {
let cache_dir = dirs::cache_dir().unwrap().join("payjoin-cli");

if cache_dir.exists() {
std::fs::remove_dir_all(cache_dir)?;
}
Ok(())
}

#[cfg(feature = "v1")]
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn send_receive_payjoin_v1() -> Result<(), BoxError> {
@@ -203,6 +212,8 @@
use tempfile::TempDir;
use tokio::process::Child;

clear_payjoin_cache()?;

type Result<T> = std::result::Result<T, BoxError>;

init_tracing();