Step 7/10 : RUN pip3 install -r /app/config/tensorrt_conversion/requirements.txt
 ---> Running in ea27636dbe2b
Collecting onnx==1.10.2
  Downloading onnx-1.10.2-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (12.7 MB)
Collecting onnxruntime==1.10.0
  Downloading onnxruntime-1.10.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.9 MB)
Collecting opencv-python==4.3.0.38
  Downloading opencv_python-4.3.0.38-cp36-cp36m-manylinux2014_x86_64.whl (49.3 MB)
Collecting numpy==1.18.5
  Downloading numpy-1.18.5-cp36-cp36m-manylinux1_x86_64.whl (20.1 MB)
Collecting torch==1.6.0
  Downloading torch-1.6.0-cp36-cp36m-manylinux1_x86_64.whl (748.8 MB)
Collecting pycuda==2022.1
  Downloading pycuda-2022.1.tar.gz (1.7 MB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing wheel metadata: started
  Preparing wheel metadata: finished with status 'done'
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from onnx==1.10.2->-r /app/config/tensorrt_conversion/requirements.txt (line 1)) (1.15.0)
Collecting protobuf
  Downloading protobuf-3.19.6-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
Collecting typing-extensions>=3.6.2.1
  Downloading typing_extensions-4.1.1-py3-none-any.whl (26 kB)
Collecting flatbuffers
  Downloading flatbuffers-22.12.6-py2.py3-none-any.whl (26 kB)
Collecting future
  Downloading future-0.18.2.tar.gz (829 kB)
Requirement already satisfied: pytools>=2011.2 in /usr/local/lib/python3.6/dist-packages (from pycuda==2022.1->-r /app/config/tensorrt_conversion/requirements.txt (line 6)) (2020.4.3)
Requirement already satisfied: appdirs>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from pycuda==2022.1->-r /app/config/tensorrt_conversion/requirements.txt (line 6)) (1.4.4)
Requirement already satisfied: mako in /usr/local/lib/python3.6/dist-packages (from pycuda==2022.1->-r /app/config/tensorrt_conversion/requirements.txt (line 6)) (1.1.3)
Requirement already satisfied: dataclasses>=0.7; python_version <= "3.6" in /usr/local/lib/python3.6/dist-packages (from pytools>=2011.2->pycuda==2022.1->-r /app/config/tensorrt_conversion/requirements.txt (line 6)) (0.7)
Requirement already satisfied: decorator>=3.2.0 in /usr/local/lib/python3.6/dist-packages (from pytools>=2011.2->pycuda==2022.1->-r /app/config/tensorrt_conversion/requirements.txt (line 6)) (4.4.2)
Requirement already satisfied: MarkupSafe>=0.9.2 in /usr/local/lib/python3.6/dist-packages (from mako->pycuda==2022.1->-r /app/config/tensorrt_conversion/requirements.txt (line 6)) (1.1.1)
Building wheels for collected packages: pycuda, future
  Building wheel for pycuda (PEP 517): started
  Building wheel for pycuda (PEP 517): still running...
  Building wheel for pycuda (PEP 517): finished with status 'done'
  Created wheel for pycuda: filename=pycuda-2022.1-cp36-cp36m-linux_x86_64.whl size=630317 sha256=66766b16aed1ec5617b5f0223cfc5ca5a79a89558c7427d2db161f737c149466
  Stored in directory: /root/.cache/pip/wheels/19/38/0f/fe701ae5ef6a52593f610c1d5ceec8dcbfae7dbcbd34518478
  Building wheel for future (setup.py): started
  Building wheel for future (setup.py): finished with status 'done'
  Created wheel for future: filename=future-0.18.2-py3-none-any.whl size=491059 sha256=0f955cb18c9fd5db6f56cfc57fe191fb3cb2ea16ec9331a858f681882f8788d2
  Stored in directory: /root/.cache/pip/wheels/6e/9c/ed/4499c9865ac1002697793e0ae05ba6be33553d098f3347fb94
Successfully built pycuda future
Installing collected packages: protobuf, numpy, typing-extensions, onnx, flatbuffers, onnxruntime, opencv-python, future, torch, pycuda
  Attempting uninstall: numpy
    Found existing installation: numpy 1.19.2
    Uninstalling numpy-1.19.2:
      Successfully uninstalled numpy-1.19.2
  Attempting uninstall: pycuda
    Found existing installation: pycuda 2020.1
    Uninstalling pycuda-2020.1:
      Successfully uninstalled pycuda-2020.1
Successfully installed flatbuffers-22.12.6 future-0.18.2 numpy-1.18.5 onnx-1.10.2 onnxruntime-1.10.0 opencv-python-4.3.0.38 protobuf-3.19.6 pycuda-2022.1 torch-1.6.0 typing-extensions-4.1.1
WARNING: You are using pip version 20.2.4; however, version 21.3.1 is available.
You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
Removing intermediate container ea27636dbe2b
 ---> a5eac53496a6
Step 8/10 : RUN python3.6 demo_darknet2onnx.py /app/config/tensorrt_conversion/input_volume/yolov4-tiny.cfg /app/config/tensorrt_conversion/input_volume/coco.names /app/config/tensorrt_conversion/input_volume/yolov4-tiny.weights /app/config/tensorrt_conversion/input_volume/512.jpg 1
 ---> Running in c32c4d1103a0
9223372036854771712
150437888
151130112
197197824
371204096
351023104
351072256
341434368
341557248
/app/config/tensorrt_conversion/tool/darknet2pytorch.py:174: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  x = outputs[layers[0]][:, b // groups * group_id:b // groups * (group_id + 1)]
/usr/local/lib/python3.6/dist-packages/numpy/core/function_base.py:117: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  num = operator.index(num)
/app/config/tensorrt_conversion/tool/yolo_layer.py:231: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  bx = bxy[:, ii : ii + 1] + torch.tensor(grid_x, device=device, dtype=torch.float32)  # grid_x.to(device=device, dtype=torch.float32)
/app/config/tensorrt_conversion/tool/yolo_layer.py:233: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  by = bxy[:, ii + 1 : ii + 2] + torch.tensor(grid_y, device=device, dtype=torch.float32)  # grid_y.to(device=device, dtype=torch.float32)
Converting to onnx and running demo ...
MEMORY LIMIT
START OF MAIN
BEFORE DARKNET
layer     filters    size              input                output
    0 conv 32 3 x 3 / 2 416 x 416 x 3 -> 208 x 208 x 32
    1 conv 64 3 x 3 / 2 208 x 208 x 32 -> 104 x 104 x 64
    2 conv 64 3 x 3 / 1 104 x 104 x 64 -> 104 x 104 x 64
    3 route 2
    4 conv 32 3 x 3 / 1 104 x 104 x 64 -> 104 x 104 x 32
    5 conv 32 3 x 3 / 1 104 x 104 x 32 -> 104 x 104 x 32
    6 route 5 4
    7 conv 64 1 x 1 / 1 104 x 104 x 64 -> 104 x 104 x 64
    8 route 2 7
    9 max 2 x 2 / 2 104 x 104 x 128 -> 52 x 52 x 64
   10 conv 128 3 x 3 / 1 52 x 52 x 64 -> 52 x 52 x 128
   11 route 10
   12 conv 64 3 x 3 / 1 52 x 52 x 128 -> 52 x 52 x 64
   13 conv 64 3 x 3 / 1 52 x 52 x 64 -> 52 x 52 x 64
   14 route 13 12
   15 conv 128 1 x 1 / 1 52 x 52 x 128 -> 52 x 52 x 128
   16 route 10 15
   17 max 2 x 2 / 2 52 x 52 x 256 -> 26 x 26 x 128
   18 conv 256 3 x 3 / 1 26 x 26 x 128 -> 26 x 26 x 256
   19 route 18
   20 conv 128 3 x 3 / 1 26 x 26 x 256 -> 26 x 26 x 128
   21 conv 128 3 x 3 / 1 26 x 26 x 128 -> 26 x 26 x 128
   22 route 21 20
   23 conv 256 1 x 1 / 1 26 x 26 x 256 -> 26 x 26 x 256
   24 route 18 23
   25 max 2 x 2 / 2 26 x 26 x 512 -> 13 x 13 x 256
   26 conv 512 3 x 3 / 1 13 x 13 x 256 -> 13 x 13 x 512
   27 conv 256 1 x 1 / 1 13 x 13 x 512 -> 13 x 13 x 256
   28 conv 512 3 x 3 / 1 13 x 13 x 256 -> 13 x 13 x 512
   29 conv 255 1 x 1 / 1 13 x 13 x 512 -> 13 x 13 x 255
   30 detection
   31 route 27
   32 conv 128 1 x 1 / 1 13 x 13 x 256 -> 13 x 13 x 128
   33 upsample * 2 13 x 13 x 128 -> 26 x 26 x 128
   34 route 33 23
   35 conv 256 3 x 3 / 1 26 x 26 x 384 -> 26 x 26 x 256
   36 conv 255 1 x 1 / 1 26 x 26 x 256 -> 26 x 26 x 255
   37 detection
Loading weights from /app/config/tensorrt_conversion/input_volume/yolov4-tiny.weights... Done!
AFTER DARKNET
AFTER EXPORT
Onnx model exporting done
BEFORE DARKNET
layer     filters    size              input                output
    0 conv 32 3 x 3 / 2 416 x 416 x 3 -> 208 x 208 x 32
    1 conv 64 3 x 3 / 2 208 x 208 x 32 -> 104 x 104 x 64
    2 conv 64 3 x 3 / 1 104 x 104 x 64 -> 104 x 104 x 64
    3 route 2
    4 conv 32 3 x 3 / 1 104 x 104 x 64 -> 104 x 104 x 32
    5 conv 32 3 x 3 / 1 104 x 104 x 32 -> 104 x 104 x 32
    6 route 5 4
    7 conv 64 1 x 1 / 1 104 x 104 x 64 -> 104 x 104 x 64
    8 route 2 7
    9 max 2 x 2 / 2 104 x 104 x 128 -> 52 x 52 x 64
   10 conv 128 3 x 3 / 1 52 x 52 x 64 -> 52 x 52 x 128
   11 route 10
   12 conv 64 3 x 3 / 1 52 x 52 x 128 -> 52 x 52 x 64
   13 conv 64 3 x 3 / 1 52 x 52 x 64 -> 52 x 52 x 64
   14 route 13 12
   15 conv 128 1 x 1 / 1 52 x 52 x 128 -> 52 x 52 x 128
   16 route 10 15
   17 max 2 x 2 / 2 52 x 52 x 256 -> 26 x 26 x 128
   18 conv 256 3 x 3 / 1 26 x 26 x 128 -> 26 x 26 x 256
   19 route 18
   20 conv 128 3 x 3 / 1 26 x 26 x 256 -> 26 x 26 x 128
   21 conv 128 3 x 3 / 1 26 x 26 x 128 -> 26 x 26 x 128
   22 route 21 20
   23 conv 256 1 x 1 / 1 26 x 26 x 256 -> 26 x 26 x 256
   24 route 18 23
   25 max 2 x 2 / 2 26 x 26 x 512 -> 13 x 13 x 256
   26 conv 512 3 x 3 / 1 13 x 13 x 256 -> 13 x 13 x 512
   27 conv 256 1 x 1 / 1 13 x 13 x 512 -> 13 x 13 x 256
   28 conv 512 3 x 3 / 1 13 x 13 x 256 -> 13 x 13 x 512
   29 conv 255 1 x 1 / 1 13 x 13 x 512 -> 13 x 13 x 255
   30 detection
   31 route 27
   32 conv 128 1 x 1 / 1 13 x 13 x 256 -> 13 x 13 x 128
   33 upsample * 2 13 x 13 x 128 -> 26 x 26 x 128
   34 route 33 23
   35 conv 256 3 x 3 / 1 26 x 26 x 384 -> 26 x 26 x 256
   36 conv 255 1 x 1 / 1 26 x 26 x 256 -> 26 x 26 x 255
   37 detection
Loading weights from /app/config/tensorrt_conversion/input_volume/yolov4-tiny.weights... Done!
AFTER DARKNET
AFTER EXPORT
Onnx model exporting done
END OF TRANSFORM
The model expects input shape: [1, 3, 416, 416]
Shape of the network input: (1, 3, 416, 416)
-----------------------------------
       max and argmax : 0.000535
                  nms : 0.001507
Post processing total : 0.002041
-----------------------------------
person: 0.499412
save plot results to predictions_onnx.jpg
Removing intermediate container c32c4d1103a0
 ---> bb4f0000a9d6
Step 9/10 : RUN trtexec --onnx=/app/config/tensorrt_conversion/yolov4_1_3_416_416_static.onnx --explicitBatch --saveEngine=yolov4_tiny.trt --fp16 --verbose
 ---> Running in 334c234488b6
&&&& RUNNING TensorRT.trtexec # trtexec --onnx=/app/config/tensorrt_conversion/yolov4_1_3_416_416_static.onnx --explicitBatch --saveEngine=yolov4_tiny.trt --fp16 --verbose
[12/24/2022-07:18:06] [I] === Model Options ===
[12/24/2022-07:18:06] [I] Format: ONNX
[12/24/2022-07:18:06] [I] Model: /app/config/tensorrt_conversion/yolov4_1_3_416_416_static.onnx
[12/24/2022-07:18:06] [I] Output:
[12/24/2022-07:18:06] [I] === Build Options ===
[12/24/2022-07:18:06] [I] Max batch: explicit
[12/24/2022-07:18:06] [I] Workspace: 16 MiB
[12/24/2022-07:18:06] [I] minTiming: 1
[12/24/2022-07:18:06] [I] avgTiming: 8
[12/24/2022-07:18:06] [I] Precision: FP32+FP16
[12/24/2022-07:18:06] [I] Calibration:
[12/24/2022-07:18:06] [I] Refit: Disabled
[12/24/2022-07:18:06] [I] Safe mode: Disabled
[12/24/2022-07:18:06] [I] Save engine: yolov4_tiny.trt
[12/24/2022-07:18:06] [I] Load engine:
[12/24/2022-07:18:06] [I] Builder Cache: Enabled
[12/24/2022-07:18:06] [I] NVTX verbosity: 0
[12/24/2022-07:18:06] [I] Tactic sources: Using default tactic sources
[12/24/2022-07:18:06] [I] Input(s)s format: fp32:CHW
[12/24/2022-07:18:06] [I] Output(s)s format: fp32:CHW
[12/24/2022-07:18:06] [I] Input build shapes: model
[12/24/2022-07:18:06] [I] Input calibration shapes: model
[12/24/2022-07:18:06] [I] === System Options ===
[12/24/2022-07:18:06] [I] Device: 0
[12/24/2022-07:18:06] [I] DLACore:
[12/24/2022-07:18:06] [I] Plugins:
[12/24/2022-07:18:06] [I] === Inference Options ===
[12/24/2022-07:18:06] [I] Batch: Explicit
[12/24/2022-07:18:06] [I] Input inference shapes: model
[12/24/2022-07:18:06] [I] Iterations: 10
[12/24/2022-07:18:06] [I] Duration: 3s (+ 200ms warm up)
[12/24/2022-07:18:06] [I] Sleep time: 0ms
[12/24/2022-07:18:06] [I] Streams: 1
[12/24/2022-07:18:06] [I] ExposeDMA: Disabled
[12/24/2022-07:18:06] [I] Data transfers: Enabled
[12/24/2022-07:18:06] [I] Spin-wait: Disabled
[12/24/2022-07:18:06] [I] Multithreading: Disabled
[12/24/2022-07:18:06] [I] CUDA Graph: Disabled
[12/24/2022-07:18:06] [I] Separate profiling: Disabled
[12/24/2022-07:18:06] [I] Skip inference: Disabled
[12/24/2022-07:18:06] [I] Inputs:
[12/24/2022-07:18:06] [I] === Reporting Options ===
[12/24/2022-07:18:06] [I] Verbose: Enabled
[12/24/2022-07:18:06] [I] Averages: 10 inferences
[12/24/2022-07:18:06] [I] Percentile: 99
[12/24/2022-07:18:06] [I] Dump refittable layers: Disabled
[12/24/2022-07:18:06] [I] Dump output: Disabled
[12/24/2022-07:18:06] [I] Profile: Disabled
[12/24/2022-07:18:06] [I] Export timing to JSON file:
[12/24/2022-07:18:06] [I] Export output to JSON file:
[12/24/2022-07:18:06] [I] Export profile to JSON file:
[12/24/2022-07:18:06] [I] Cuda failure: CUDA driver version is insufficient for CUDA runtime version
The command '/bin/sh -c trtexec --onnx=/app/config/tensorrt_conversion/yolov4_1_3_416_416_static.onnx --explicitBatch --saveEngine=yolov4_tiny.trt --fp16 --verbose' returned a non-zero code: 139
[Container] 2022/12/24 07:18:06 Command did not exit successfully docker build --no-cache --build-arg no_proxy=$no_proxy --build-arg NO_PROXY=$no_proxy --build-arg http_proxy=$http_proxy --build-arg HTTP_PROXY=$http_proxy --build-arg HTTPS_PROXY=$http_proxy --build-arg https_proxy=$http_proxy --build-arg RUNTIME_BASE=$RUNTIME_BASE --build-arg GPU=True --build-arg CUDNN_HALF=True --build-arg SQS_QUEUE_URL=$SQS_QUEUE_URL -t $REPOSITORY_URI:$STAGE .
exit status 139
[Container] 2022/12/24 07:18:06 Phase complete: BUILD State: FAILED
[Container] 2022/12/24 07:18:06 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build --no-cache --build-arg no_proxy=$no_proxy --build-arg NO_PROXY=$no_proxy --build-arg http_proxy=$http_proxy --build-arg HTTP_PROXY=$http_proxy --build-arg HTTPS_PROXY=$http_proxy --build-arg https_proxy=$http_proxy --build-arg RUNTIME_BASE=$RUNTIME_BASE --build-arg GPU=True --build-arg CUDNN_HALF=True --build-arg SQS_QUEUE_URL=$SQS_QUEUE_URL -t $REPOSITORY_URI:$STAGE .. Reason: exit status 139
[Container] 2022/12/24 07:18:07 Entering phase POST_BUILD
[Container] 2022/12/24 07:18:07 Running command echo Build completed on `date`
Build completed on Sat Dec 24 07:18:07 UTC 2022
[Container] 2022/12/24 07:18:07 Running command echo Pushing the Docker images...
Pushing the Docker images...
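Note on the failure above: the build dies at Step 9/10 with exit status 139 and "Cuda failure: CUDA driver version is insufficient for CUDA runtime version". A minimal diagnostic sketch follows; it only decodes the exit status, and the workaround shown in the comments is an assumption (the `yolov4-trt` image tag is illustrative, and it assumes the CodeBuild host exposes no NVIDIA driver to `docker build`, which never runs RUN steps with GPU access).

```shell
#!/bin/sh
# Exit status 139 = 128 + 11: the process was killed by signal 11 (SIGSEGV),
# i.e. trtexec segfaulted after the CUDA driver/runtime mismatch.
status=139
sig=$((status - 128))
echo "trtexec terminated by signal ${sig} (SIGSEGV)"

# Plain `docker build` has no GPU, so trtexec cannot build an engine during the
# image build. A common workaround (assumption, not from the log) is to defer
# engine building to container runtime on a GPU host:
#
#   docker run --rm --gpus all yolov4-trt \
#     trtexec --onnx=/app/config/tensorrt_conversion/yolov4_1_3_416_416_static.onnx \
#             --explicitBatch --saveEngine=yolov4_tiny.trt --fp16
```

Deferring the `trtexec` step also matches TensorRT practice more generally, since serialized engines are specific to the GPU and TensorRT version they were built on.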