chore: add logger write through evaluate advise #895
Conversation
Summary of Changes

Hello @Yzing, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the evaluation framework by integrating logging via the winston library. It introduces detailed error and success logging for the data-extraction and chart-advising phases, writing these records to structured JSON Lines files. The evaluation process has also been reworked for asynchronous execution, and the success criteria for the tests have been raised, aiming for better reliability, performance monitoring, and a higher quality bar for the system's output.
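For context, a winston logger that writes one JSON object per line (JSON Lines) can be configured roughly as below. This is a minimal sketch, not the PR's actual setup; the file paths and the split between info and error transports are assumptions:

```ts
import winston from 'winston';

// Minimal sketch of a JSON Lines logger (paths are hypothetical, not from the PR).
// winston.format.json() serializes each log call to a single JSON object per line,
// which is what makes the log file parseable line-by-line later.
const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [
    // The info transport receives 'info' and above; the error file only 'error'.
    new winston.transports.File({ filename: 'logs/advise-info.log', level: 'info' }),
    new winston.transports.File({ filename: 'logs/advise-error.log', level: 'error' }),
  ],
});

// Logging a plain object, in the same style as the reviewed code:
logger.info({ msg: 'advise success', input: 'what is the trend of sales?' });
```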
Code Review
This pull request introduces logging for the advise evaluation tests and refactors the test execution. The changes are a good step towards better diagnostics. However, I've identified a few significant issues with the new test structure in __tests__/evaluation/advise/shared.ts. The primary concern is a flawed use of Promises with Jest, which will not wait for tests to complete as expected. Additionally, the summary test is misplaced, leading to redundant definitions. I've also pointed out a couple of medium-severity issues related to potential runtime errors from unsafe environment variable access and division by zero. My suggestions aim to correct the test execution flow to be robust and reliable while keeping the new logging functionality.
The reviewed diff hunk from `__tests__/evaluation/advise/shared.ts`:

```diff
 const evaluateChartAdvise = async (chartId: string) => {
   const dataset = loadDataset(chartId);

-  dataset.forEach((data: TestData, i: number) => {
-    it(`evaluate ${chartId} case ${i}`, async () => {
-      console.log(`evaluate ${chartId} case ${i}`);
-      const { answer } = data;
-      const question = selectQuestion(data);
-      const dataShards = await ava.extract(question);
-      const advises = await ava.advise(dataShards);
-      const { spec } = advises?.[0]?.charts?.[0] || {};
-      const success = isPass(spec, answer);
-      expect(success).toEqual(true);
+  const promises = dataset.map((data: TestData, i: number) => {
+    return new Promise((resolve) => {
+      it(`evaluate ${chartId} case ${i}`, async () => {
+        console.log(`evaluate ${chartId} case ${i}`);
+        const { answer } = data;
+        const question = selectQuestion(data);
+        let dataShards = [];
+        try {
+          dataShards = await ava.extract(question);
+        } catch (e) {
+          logger.error({
+            msg: 'extract error',
+            input: question,
+          });
+        }
+        if (dataShards.length === 0) {
+          logger.error({
+            msg: 'extract empty',
+            input: question,
+          });
+          resolve(null);
+          return;
+        }
+        try {
+          const advises = await ava.advise(dataShards);
+          const { spec } = advises?.[0]?.charts?.[0] || {};
+          logger.info({
+            msg: 'advise success',
+            input: question,
+            dataShards,
+            output: spec,
+            source: answer,
+          });
+        } catch (e) {
+          logger.error({
+            msg: 'advise error',
+            input: question,
+          });
+        }
+        resolve(null);
+      });
+    });
+  });

+  await Promise.all(promises);
```
The current test execution logic using `new Promise((resolve) => it(...))` and `await Promise.all(...)` is an anti-pattern in Jest and does not work as intended. `it` only registers a test callback during Jest's collection phase; the registered bodies (where `resolve` is eventually called) run in a later phase, so the `await` cannot complete while tests are still being defined, and the subsequent summary test is liable to run prematurely on incomplete data. A sketch of this two-phase behavior follows.
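Here is a minimal standalone sketch (hypothetical, not from the PR) of why awaiting at definition time cannot work — `it` merely records its callback and returns immediately:

```ts
// Stand-in for Jest's `it`: registration only; bodies run in a later phase.
type TestBody = () => Promise<void>;
const registry: TestBody[] = [];
const it = (name: string, fn: TestBody) => {
  registry.push(fn); // the async body is NOT invoked here
};

// Definition time (collection phase): the Promise executors run, but no test
// body has run, so no resolve() has fired and every promise is still pending.
const promises = [0, 1].map(
  (i) =>
    new Promise<void>((resolve) => {
      it(`case ${i}`, async () => resolve()); // resolve lives inside the body
    }),
);

// Awaiting Promise.all(promises) at this point would hang: Jest only runs the
// registered bodies after collection has finished.
console.log('registered tests:', registry.length); // 2, but none executed yet
```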
The `evaluateChartAdvise` function should not be `async`. Tests should be defined synchronously in a loop, and Jest's test runner will handle their execution correctly:
```ts
const evaluateChartAdvise = (chartId: string) => {
  const dataset = loadDataset(chartId);
  dataset.forEach((data: TestData, i: number) => {
    it(`evaluate ${chartId} case ${i}`, async () => {
      console.log(`evaluate ${chartId} case ${i}`);
      const { answer } = data;
      const question = selectQuestion(data);
      let dataShards = [];
      try {
        dataShards = await ava.extract(question);
      } catch (e) {
        logger.error({
          msg: 'extract error',
          input: question,
        });
      }
      if (dataShards.length === 0) {
        logger.error({
          msg: 'extract empty',
          input: question,
        });
        return;
      }
      try {
        const advises = await ava.advise(dataShards);
        const { spec } = advises?.[0]?.charts?.[0] || {};
        logger.info({
          msg: 'advise success',
          input: question,
          dataShards,
          output: spec,
          source: answer,
        });
      } catch (e) {
        logger.error({
          msg: 'advise error',
          input: question,
        });
      }
    });
  });
};
```

The next reviewed hunk, also in `shared.ts`:

```diff
+  it('evaluate pass rate should >= 0.95', () => {
+    const data = fs.readFileSync(loggerPath?.info);
+    const lines = data
+      .toString()
+      .split('\n')
+      .filter((v) => !!v.length);
+    const passCount = lines.filter((line) => {
+      try {
+        const log = JSON.parse(line);
+        return isPass(log.spec, log.source);
+      } catch (e) {
+        return false;
+      }
+    }).length;
+    console.log('pass rate: ', passCount / lines.length);
+    expect(passCount / lines.length).toBeGreaterThanOrEqual(0.95);
+  });
```
This `it` block for calculating the pass rate is misplaced inside the `evaluateChartAdvise` function. Since that function is called in a loop for each `chartId`, it creates multiple tests with the same name, which is incorrect and leads to confusing test reports. This summary test should be defined only once and run after all the individual evaluation tests have completed.

Remove the block from here and place a single, corrected version inside the `describe('evaluation advise', ...)` block, after the `chartIds.forEach(evaluateChartAdvise);` call. When you move this logic, please also address two issues within it (a corrected version is sketched after this list):

- Incorrect file path: the log file is read with `fs.readFileSync(loggerPath?.info)`, which will fail if no logger path is provided. It should use the `infoLogPath` variable.
- Division-by-zero risk: the calculation `passCount / lines.length` yields `NaN` when `lines.length` is 0. Guard against this to avoid an unexpected test failure, e.g. `const rate = lines.length > 0 ? passCount / lines.length : 0;`.
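Putting both fixes together, the relocated summary test might look like this — a sketch assuming `infoLogPath` points at the JSON Lines info log, and assuming the `output`/`source` fields written by the `logger.info` call above land at the top level of each parsed line:

```ts
// Sketch: defined once, after chartIds.forEach(evaluateChartAdvise);
it('evaluate pass rate should >= 0.95', () => {
  const data = fs.readFileSync(infoLogPath); // fixed path: infoLogPath, not loggerPath?.info
  const lines = data
    .toString()
    .split('\n')
    .filter((v) => !!v.length);
  const passCount = lines.filter((line) => {
    try {
      const log = JSON.parse(line);
      // Assumption: reads the `output`/`source` fields written by logger.info above.
      return isPass(log.output, log.source);
    } catch (e) {
      return false;
    }
  }).length;
  const rate = lines.length > 0 ? passCount / lines.length : 0; // avoid NaN on an empty log
  console.log('pass rate: ', rate);
  expect(rate).toBeGreaterThanOrEqual(0.95);
});
```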
```diff
 const ava = new AVA({
   llm: {
-    appId: process.env.TBOX_APP_ID!,
-    authorization: process.env.TBOX_AUTHORIZATION!,
+    appId: process.env.TBOX_LLM_APP_ID!,
+    authorization: process.env.TBOX_LLM_AUTH!,
   },
 });
```
Using the non-null assertion operator (`!`) for environment variables is risky. If `TBOX_LLM_APP_ID` or `TBOX_LLM_AUTH` are not set in the test environment, the assertion silences the compile-time check but `undefined` still flows through at runtime, producing a confusing failure later. It's safer to perform an explicit check and throw an informative error to help debug setup issues:
```ts
const { TBOX_LLM_APP_ID, TBOX_LLM_AUTH } = process.env;

if (!TBOX_LLM_APP_ID || !TBOX_LLM_AUTH) {
  // Throw an error if the required environment variables are not set.
  // This provides a clear failure message instead of a runtime crash.
  throw new Error('Missing required environment variables: TBOX_LLM_APP_ID and TBOX_LLM_AUTH');
}

const ava = new AVA({
  llm: {
    appId: TBOX_LLM_APP_ID,
    authorization: TBOX_LLM_AUTH,
  },
});
```
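As a variation, the check can be factored into a small helper so every required variable fails the same way. This is a sketch; `requireEnv` is a hypothetical name, not part of the PR, and `AVA` is imported as in the surrounding test file:

```ts
// Hypothetical helper: read a required env var or fail fast with a clear message.
const requireEnv = (name: string): string => {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

const ava = new AVA({
  llm: {
    appId: requireEnv('TBOX_LLM_APP_ID'),
    authorization: requireEnv('TBOX_LLM_AUTH'),
  },
});
```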