Erldns ci #17
base: main
Conversation
src/dnstest_reporter.erl
Outdated
{Pass, Fail} = lists:partition(fun(#{result := Result}) -> true =:= Result end, TestResults),
?LOG_INFO(#{
    passed_num => length(Pass),
    passed => lists:map(fun(#{name := Name}) -> Name end, Pass),
    failed_num => length(Fail),
    failed => lists:map(fun(#{name := Name}) -> Name end, Fail)
}),
It has become very hard to read the output now.
I could remove the failed/passed elements, which would probably make the output smaller, and also configure the logger to not output timestamps, since they're not needed. However, one thing I was missing when reading the output was knowing why a record was failing, so hopefully the `lists:foreach(fun(Return) -> ?LOG_INFO(Return) end, Fail).` below can help with that (?)
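A minimal sketch of the per-failure logging idea discussed here: partition the results as the reporter does, then print each failing result map in full so the failure reason is visible. The module name is hypothetical, and `io:format/2` stands in for `?LOG_INFO` to keep the sketch dependency-free.

```erlang
-module(report_failures).
-export([report/1]).

%% TestResults is assumed to be a list of maps with at least a
%% `result` key (`true` for a pass, anything else for a failure).
report(TestResults) ->
    {_Pass, Fail} =
        lists:partition(fun(#{result := Result}) -> Result =:= true end,
                        TestResults),
    %% Log the full map for each failure so the reason is visible,
    %% instead of only the failing test's name.
    lists:foreach(fun(Return) -> io:format("~p~n", [Return]) end, Fail),
    length(Fail).
```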
This is what I see for the overall test suite run:
2025-04-15T13:10:52.207543+07:00 info: failed: ['8_bit_txt',any_query,apex_level_a_but_no_a,apex_level_ns,basic_ns_resolution,cname_and_wildcard_but_no_correct_type,cname_and_wildcard,direct_dnskey,double_srv,ent_rr_enclosed_in_ent,escaped_txt_1,glue_record,glue_referral,internal_referral,long_name,mx_case_sensitivity_with_ap,mx_with_simple_additional_processing,naptr,ns_at_delegation,ns_with_identical_glue,root_cname,root_mx,root_ns,rp,very_long_text,ns_zonecut,ns_zonecut_child_cname,ns_recursion_breakout,cname_wildcard_cover,caa_record,cname_to_nxdomain_any_dnssec,cname_to_unauth_any_dnssec,cname_to_unauth_dnssec,direct_dnskey_dnssec,double_dnssec,dnssec_soa,dnssec_ns,dnssec_a,dnssec_a_with_udp_payload_size,dnssec_cname,dnssec_follow_cname,dnssec_cds,dnssec_cdnskey,dnssec_dnskey,nsec_name,nsec_name_mixed_case,nsec_type,nsec_name_any,nsec_type_any,nsec_rr_type_bitmap_wildcard,cname_to_nxdomain_dnssec,cname_wildcard_chain_dnssec], failed_num: 52, passed: [any_nxdomain,any_wildcard,apex_level_a,basic_a_resolution,basic_aaaa_resolution,basic_hinfo,basic_soa_resolution,basic_srv_resolution,basic_txt_resolution,cname_and_wildcard_at_root,cname_but_no_correct_type,cname_loop_breakout,cname_to_nxdomain_any,cname_to_nxdomain,cname_to_referral,cname_to_unauth_any,cname_to_unauth,cname_wildcard_chain,cross_domain_cname_to_wildcard,direct_wildcard,double,ds_at_apex_noerror,ent_wildcard_below_ent,external_cname_pointer,five_levels_wildcard_one_below_apex,five_levels_wildcard,internal_referral_glue,multi_step_cname_resolution,multi_txt_escape_resolution,multi_txt_resolution,mx_to_cname,non_existing_record_other_types_exist_ns,non_existing_record_other_types_exist,nx_domain_for_unknown_record,obscured_wildcard,one_step_cname_resolution,out_of_baliwick_referral,pretty_big_packet,root_srv,same_level_referral_soa,same_level_referral,too_big_for_udp_query_no_truncate_additional,too_big_for_udp_query,unknown_domain,wildcard_overlaps_delegation,wrong_type_wildcard,ns_a_record,ns_aaaa_record,cname_case,direct_rrsig], passed_num: 50
As per the output we have 50 PASSING tests and 52 FAILING tests.
Realistically, for this to be usable we need to, for example, exit with 1 if we have failures. And, ignoring any other issues the test suite might have, I imagine we want to have a `skip` or `todo` list so that specs that are failing will be ignored when we decide if the run is a success or failure.
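The exit-status rule described here is small enough to sketch. Module and function names are hypothetical; the skip/todo filtering is assumed, not part of the PR as shown.

```erlang
-module(ci_exit).
-export([status/1]).

%% Exit status for a run: 0 when no tests failed, 1 otherwise.
%% A skip/todo list (not shown here) would first be removed from
%% Fail so that known failures do not fail the run.
status([])      -> 0;
status([_ | _]) -> 1.

%% The reporter would then call erlang:halt(ci_exit:status(Fail)).
```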
@DXTimer added some logic to halt with a return status when no tests failed (0) or when any test failed (1). I've prioritised returning data structures that I can use from …
Relates to https://github.com/dnsimple/erldnsimple/issues/426
The changes are massive.
Removed `lager` and switched to `logger`; this is unrelated to prod, and the cleanup was just a few lines of code.
Fixed the `zones.json` file. The diff is huge because of indentation (half of the file was entirely indented to the left); the actual changes are the following: removed a `{"comment", "insert stuff"}` object that doesn't adhere to the stack, fixed one record to use `dname` instead of `ip` and one MX record to use `preference` in its data instead of its own `dname`, and added a missing `keytag`.
Test definitions used to be of the form `{name, {{key1, _},{key2, _},{key3, _}}}`, which is too position-dependent and error-prone to pattern match. The large file containing the definitions now defines them using maps, as in `{name, #{key1 => _, key2 => _, key3 => _}}`; that way pattern matching is more flexible (and Dialyzer is happier). See `dnstest_harness:run/1,2` and how it is used in Add dnstest suite to CI erldns#197.
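The tuple-to-map change can be illustrated with a toy definition (the key names `question`/`header` here are hypothetical, not the actual dnstest schema): with maps, a consumer matches only the key it needs, regardless of position.

```erlang
-module(defs_demo).
-export([question_of/1]).

%% Tuple form forced matching every position in order:
%%   {Name, {{question, Q}, {header, H}, {records, Rs}}}
%% Map form needs only the key of interest, in any order:
question_of({_Name, #{question := Q}}) -> Q.
```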