-INFO:main:Namespace(sut_server=['http://10.0.0.14:8008', 'http://10.0.0.12:8008'], dataset='coco-1024', dataset_path='/root/CM/repos/local/cache/61dd835801c542a3/install', profile='stable-diffusion-xl-pytorch', scenario='Offline', max_batchsize=1, threads=1, accuracy=True, find_peak_performance=False, backend='pytorch', model_name='stable-diffusion-xl', output='/root/CM/repos/local/cache/d549713c4a534705/test_results/aqua-reference-rocm-pytorch-v2.6.0.dev20241118-scc24-main/stable-diffusion-xl/offline/accuracy', qps=None, model_path='/root/CM/repos/local/cache/c4b6bbbebe504f28/stable_diffusion_fp16', dtype='fp16', device='cuda', latent_framework='torch', mlperf_conf='mlperf.conf', user_conf='/root/CM/repos/mlcommons@cm4mlops/script/generate-mlperf-inference-user-conf/tmp/6626c9658bff4d2291e3121038a4cfca.conf', audit_conf='audit.config', ids_path='/root/CM/repos/local/cache/61dd835801c542a3/install/sample_ids.txt', time=None, count=10, debug=False, performance_sample_count=5000, max_latency=None, samples_per_query=8)
+INFO:main:Namespace(sut_server=['http://10.0.0.14:8008', 'http://10.0.0.12:8008'], dataset='coco-1024', dataset_path='/root/CM/repos/local/cache/61dd835801c542a3/install', profile='stable-diffusion-xl-pytorch', scenario='Offline', max_batchsize=1, threads=1, accuracy=True, find_peak_performance=False, backend='pytorch', model_name='stable-diffusion-xl', output='/root/CM/repos/local/cache/d549713c4a534705/test_results/aqua-reference-rocm-pytorch-v2.6.0.dev20241118-scc24-main/stable-diffusion-xl/offline/accuracy', qps=None, model_path='/root/CM/repos/local/cache/c4b6bbbebe504f28/stable_diffusion_fp16', dtype='fp16', device='cuda', latent_framework='torch', mlperf_conf='mlperf.conf', user_conf='/root/CM/repos/mlcommons@cm4mlops/script/generate-mlperf-inference-user-conf/tmp/1608e150c4d94edb9537a0fe9198425f.conf', audit_conf='audit.config', ids_path='/root/CM/repos/local/cache/61dd835801c542a3/install/sample_ids.txt', time=None, count=10, debug=False, performance_sample_count=5000, max_latency=None, samples_per_query=8)
 WARNING:backend-pytorch:Model path not provided, running with default hugging face weights
 This may not be valid for official submissions
 Keyword arguments {'safety_checker': None} are not expected by StableDiffusionXLPipeline and will be ignored.
 Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]Using the `SDPA` attention implementation on multi-gpu setup with ROCM may lead to performance issues due to the FA backend. Disabling it to use alternative backends.
-Loading pipeline components...: 57%|█████▋ | 4/7 [00:00<00:00, 12.44it/s]Loading pipeline components...: 86%|████████▌ | 6/7 [00:00<00:00, 7.56it/s]Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 9.15it/s]
-RETURNED from requests.post on predict at time 1731969667.0400865
+Loading pipeline components...: 57%|█████▋ | 4/7 [00:00<00:00, 16.65it/s]Loading pipeline components...: 86%|████████▌ | 6/7 [00:00<00:00, 11.90it/s]Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 10.73it/s]
+:::MLLOG {"key": "error_invalid_config", "value": "Multiple conf files are used. This is not valid for official submission.", "time_ms": 1732142436869.237178, "namespace": "mlperf::logging", "event_type": "POINT_IN_TIME", "metadata": {"is_error": true, "is_warning": false, "file": "test_settings_internal.cc", "line_no": 539, "pid": 30316, "tid": 30316}}
+RETURNED from requests.post on predict at time 1732142751.4806168
 BEFORE lg.QuerySamplesComplete(response)
 AFTER lg.QuerySamplesComplete(response)
-RETURNED from requests.post on predict at time 1731969689.698808
+RETURNED from requests.post on predict at time 1732142752.913671
 BEFORE lg.QuerySamplesComplete(response)
 AFTER lg.QuerySamplesComplete(response)