fix: update entrypoint to use Gunicorn instead of uWSGI #34937
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request updates the Docker entrypoint script to replace uWSGI with Gunicorn as the primary application server. This change involves modifying the startup command and updating the configuration file path to align with Gunicorn's requirements, aiming to improve the application's deployment and runtime environment.
Pull request overview
Updates the TDgpt Docker entrypoint to start the web service with Gunicorn (Python config) instead of uWSGI (INI config), aligning container startup with the Gunicorn-based runtime.
Changes:
- Switch entrypoint config from `taosanode.ini` (uWSGI) to `taosanode.config.py` (Gunicorn).
- Replace the uWSGI startup command with a Gunicorn startup command and set the working directory to the taosanalytics package.
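Since the configuration moves from an INI file to a Python module, a minimal sketch of what a Gunicorn Python config such as `taosanode.config.py` might contain may help orient reviewers. All names and values below are illustrative assumptions, not the repository's actual settings:

```python
# Illustrative Gunicorn config sketch; the bind address, worker count,
# and paths are assumptions, not the project's real values.
import multiprocessing

bind = "127.0.0.1:6090"                        # assumed service address
workers = 2                                    # assumed worker count
# Gunicorn requires an integer here, hence integer division:
threads = max(multiprocessing.cpu_count() // 2, 2)
pythonpath = "/usr/local/taos/taosanode/lib"   # assumed package path
```

Gunicorn consumes a file like this via its `-c` / `--config` option, reading module-level names (`bind`, `workers`, `threads`, ...) as settings.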
Code Review
This pull request migrates the Docker entrypoint script from uWSGI to Gunicorn, updating the configuration file path and the execution command. A review comment suggests that the `cd` command introduced before the Gunicorn execution might be redundant, since the `pythonpath` is already configured, and that removing it would simplify the script.
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.
```dockerfile
RUN python3.10 -m venv --system-site-packages $VIRTUAL_ENV && \
    ln -sf /usr/local/bin/gunicorn $VIRTUAL_ENV/bin/gunicorn && \
```
`ln -sf /usr/local/bin/gunicorn $VIRTUAL_ENV/bin/gunicorn` does not ensure Gunicorn runs with the virtualenv interpreter; the symlinked console script keeps its original shebang (system Python). This can break at runtime if the service needs packages installed only into the venv (e.g., the venv installs transformers==4.40 later). Prefer installing Gunicorn into the venv (or generating a wrapper that uses `$VIRTUAL_ENV/bin/python -m gunicorn`) instead of symlinking the system script.
Suggested change, before:

```dockerfile
RUN python3.10 -m venv --system-site-packages $VIRTUAL_ENV && \
    ln -sf /usr/local/bin/gunicorn $VIRTUAL_ENV/bin/gunicorn && \
```

After:

```dockerfile
RUN python3.10 -m venv --system-site-packages "$VIRTUAL_ENV" && \
    printf '%s\n' '#!/usr/bin/env sh' "exec \"$VIRTUAL_ENV/bin/python\" -m gunicorn \"\$@\"" > "$VIRTUAL_ENV/bin/gunicorn" && \
    chmod +x "$VIRTUAL_ENV/bin/gunicorn" && \
```
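The shebang point is easy to verify: a symlink resolves to the same file, so reading the first line through the link still shows the original interpreter. A small self-contained demonstration, using throwaway stand-in paths rather than the image's real ones:

```python
# Demonstrate that symlinking a console script preserves its shebang.
import os
import tempfile

tmp = tempfile.mkdtemp()
system_script = os.path.join(tmp, "gunicorn")          # stand-in for /usr/local/bin/gunicorn
with open(system_script, "w") as f:
    f.write("#!/usr/bin/python3\nprint('serving')\n")  # original system-Python shebang

venv_bin = os.path.join(tmp, "venv", "bin")
os.makedirs(venv_bin)
link = os.path.join(venv_bin, "gunicorn")              # stand-in for $VIRTUAL_ENV/bin/gunicorn
os.symlink(system_script, link)

with open(link) as f:
    shebang = f.readline().strip()
# The link points at the same bytes, so the "venv" script still names
# the system interpreter, not the venv's python.
```

This is why the wrapper-script suggestion above execs `$VIRTUAL_ENV/bin/python -m gunicorn` explicitly instead of relying on the shebang.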
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
```python
import platform
import os.path

os.environ.setdefault("KERAS_BACKEND", "torch")
```
Setting `KERAS_BACKEND` at module import time makes the torch backend the default for all code paths that import `taosanalytics.conf` (including environments where the TensorFlow backend is desired/available). If the intent is only to default to torch when TensorFlow is not usable, consider making this conditional (e.g., only when tensorflow is absent/unimportable) or moving it into the specific Keras-using modules to avoid a global side effect during config/logging initialization.
Suggested change, before:

```python
os.environ.setdefault("KERAS_BACKEND", "torch")
```

After:

```python
# Only default to torch backend when KERAS_BACKEND is not set
# and TensorFlow is not available. This avoids a global side effect
# for environments that prefer the TensorFlow backend.
if "KERAS_BACKEND" not in os.environ:
    tf_spec = importlib.util.find_spec("tensorflow")
    if tf_spec is None:
        os.environ["KERAS_BACKEND"] = "torch"
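The conditional logic above can also be factored into a small pure function, which makes the decision easy to unit-test in isolation; `choose_backend` below is a hypothetical helper written for illustration, not code from the PR:

```python
import importlib.util


def choose_backend(env, tf_available):
    """Hypothetical helper: set the torch backend only when the user
    has not chosen one and TensorFlow is unavailable."""
    if "KERAS_BACKEND" not in env and not tf_available:
        env["KERAS_BACKEND"] = "torch"
    return env


# In real code, availability can be probed without importing TensorFlow:
tf_available = importlib.util.find_spec("tensorflow") is not None
```

`importlib.util.find_spec` only locates the module; it avoids the cost (and side effects) of actually importing TensorFlow during config initialization.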
```python
import os.path

os.environ.setdefault("KERAS_BACKEND", "torch")
```
The PR title/description is about switching the container entrypoint from uWSGI to Gunicorn, but this change also introduces Keras backend selection during `taosanalytics.conf` import. If this is required for the Gunicorn/preload startup path, please document that connection in the PR description (or split the backend change into a separate PR) so reviewers/operators understand the behavioral change.
```python
    "cfg",
    "taosanode.config.py",
)
spec = importlib.util.spec_from_file_location("taosanode_config", config_path)
```
`spec_from_file_location()` can return `None`, and `spec.loader` may be `None`; calling `spec.loader.exec_module(...)` will then raise an `AttributeError` and make the test fail with a confusing error. Add assertions/guards that `spec is not None` and `spec.loader is not None` before creating the module and executing it.
Suggested change, before:

```python
spec = importlib.util.spec_from_file_location("taosanode_config", config_path)
```

After:

```python
spec = importlib.util.spec_from_file_location("taosanode_config", config_path)
assert spec is not None
assert spec.loader is not None
```
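The same guard can be wrapped in a small loader helper so the failure mode is a clear `ImportError` rather than an `AttributeError`; `load_config_module` below is a sketch written for illustration, and the error message is invented:

```python
import importlib.util


def load_config_module(path, name="taosanode_config"):
    """Load a Python config file by path, guarding against the None
    values that spec_from_file_location() and spec.loader may carry."""
    spec = importlib.util.spec_from_file_location(name, path)
    if spec is None or spec.loader is None:
        raise ImportError(f"cannot build an import spec for {path!r}")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```

Tests can then call `load_config_module(config_path)` and inspect attributes such as `threads` directly on the returned module.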
close: https://project.feishu.cn/taosdata_td/job/detail/6829905155
This pull request introduces several important changes to improve backend service startup, configuration, and machine learning model loading. The main updates include switching the service process manager from uWSGI to Gunicorn, ensuring proper Gunicorn configuration, and enhancing Keras backend handling for anomaly detection models.
Service startup and configuration:
- The entrypoint script (`entrypoint.sh`) now starts the backend using Gunicorn instead of uWSGI and expects the configuration file to be `taosanode.config.py` instead of `taosanode.ini`. The working directory is also explicitly set before launching Gunicorn.
- The Gunicorn configuration (`taosanode.config.py`) now ensures the `threads` parameter is always an integer by using integer division (`//`), which is required by Gunicorn.
- A test now verifies that the `threads` config value is indeed an integer and at least 2, preventing misconfiguration.

Machine learning backend improvements:
- The autoencoder module (`autoencoder.py`) now lazily imports Keras, preferring the Torch backend if TensorFlow is not available. This allows the service to start even if TensorFlow is intentionally skipped. The import logic clears any failed Keras imports before retrying with the new backend.
- The `KERAS_BACKEND` environment variable now defaults to `torch` in the configuration module (`conf.py`), improving compatibility across environments.
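The clear-and-retry import pattern described above can be sketched generically; `lazy_import` is a hypothetical stand-in for the autoencoder module's Keras import logic (the real code targets `keras` specifically and switches `KERAS_BACKEND` in the retry hook):

```python
import importlib
import sys


def lazy_import(name, before_retry=None):
    """Import a module; on failure, drop any partially imported
    submodules from sys.modules, run an optional hook (e.g. switching
    KERAS_BACKEND to torch), and retry the import once."""
    try:
        return importlib.import_module(name)
    except ImportError:
        stale = [m for m in sys.modules if m == name or m.startswith(name + ".")]
        for m in stale:
            del sys.modules[m]
        if before_retry is not None:
            before_retry()
        return importlib.import_module(name)
```

Clearing `sys.modules` before the retry matters because a failed `import keras` can leave half-initialized `keras.*` entries behind that would otherwise short-circuit the second attempt.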