
fix: update entrypoint to use Gunicorn instead of uWSGI #34937

Open
tomchon wants to merge 5 commits into main from enh/add-gunicorn-for-tdgpt-2

Conversation

@tomchon
Contributor

@tomchon tomchon commented Mar 25, 2026

Closes: https://project.feishu.cn/taosdata_td/job/detail/6829905155

This pull request introduces several important changes to improve backend service startup, configuration, and machine learning model loading. The main updates include switching the service process manager from uWSGI to Gunicorn, ensuring proper Gunicorn configuration, and enhancing Keras backend handling for anomaly detection models.

Service startup and configuration:

  • The Docker entrypoint script (entrypoint.sh) now starts the backend using Gunicorn instead of uWSGI and expects the configuration file to be taosanode.config.py instead of taosanode.ini. The working directory is also explicitly set before launching Gunicorn.
  • The Gunicorn configuration (taosanode.config.py) now ensures the threads parameter is always an integer by using integer division (//), which is required by Gunicorn.
  • A new test verifies that the Gunicorn threads config value is indeed an integer and at least 2, preventing misconfiguration.
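The integer-division fix described above can be sketched roughly as follows. This is an illustrative snippet, not the actual taosanode.config.py contents; the variable names are assumptions. Per the PR description, Gunicorn requires the threads setting to be an integer, and in Python 3 the `/` operator always yields a float, so floor division (`//`) is used instead:

```python
# Hypothetical sketch of the Gunicorn threads fix (names illustrative).
# Python 3's `/` always returns a float, which Gunicorn rejects for the
# `threads` setting; floor division (`//`) keeps the value an int.
import multiprocessing

cpu_count = multiprocessing.cpu_count()

# Broken: threads = cpu_count / 2   -> float, rejected by Gunicorn
# Fixed: floor division, with a lower bound of 2 as the test expects
threads = max(2, cpu_count // 2)

assert isinstance(threads, int) and threads >= 2
```

The `max(2, ...)` floor mirrors the new test's requirement that the configured value be an integer of at least 2.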

Machine learning backend improvements:

  • The autoencoder anomaly detection module (autoencoder.py) now lazily imports Keras, preferring the Torch backend when TensorFlow is unavailable. This allows the service to start even if TensorFlow is intentionally skipped; the import logic clears any failed Keras imports before retrying with the new backend.
  • The environment variable for the Keras backend is now set to "torch" by default in the configuration module (conf.py), improving compatibility across environments.

@tomchon tomchon requested a review from feici02 as a code owner March 25, 2026 12:06
Copilot AI review requested due to automatic review settings March 25, 2026 12:06
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the Docker entrypoint script to replace uWSGI with Gunicorn as the primary application server. This change involves modifying the startup command and updating the configuration file path to align with Gunicorn's requirements, aiming to improve the application's deployment and runtime environment.

Highlights

  • Application Server Switch: The Docker entrypoint script entrypoint.sh was modified to transition from using uWSGI as the application server to Gunicorn.
  • Configuration Update: The application's configuration file path was updated from taosanode.ini to taosanode.config.py to reflect the change to Gunicorn's Python-based configuration.



Contributor

Copilot AI left a comment


Pull request overview

Updates the TDgpt Docker entrypoint to start the web service with Gunicorn (Python config) instead of uWSGI (INI config), aligning container startup with the Gunicorn-based runtime.

Changes:

  • Switch entrypoint config from taosanode.ini (uWSGI) to taosanode.config.py (Gunicorn).
  • Replace uWSGI startup command with a Gunicorn startup command and set working directory to the taosanalytics package.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request migrates the Docker entrypoint script from using uWSGI to Gunicorn, updating the configuration file path and the execution command. A review comment suggests that the cd command introduced before the Gunicorn execution might be redundant, as the pythonpath is already configured, and its removal would simplify the script.

Copilot AI review requested due to automatic review settings March 25, 2026 14:16
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.



Comment on lines +40 to +41
RUN python3.10 -m venv --system-site-packages $VIRTUAL_ENV && \
ln -sf /usr/local/bin/gunicorn $VIRTUAL_ENV/bin/gunicorn && \

Copilot AI Mar 25, 2026


ln -sf /usr/local/bin/gunicorn $VIRTUAL_ENV/bin/gunicorn does not ensure Gunicorn runs with the virtualenv interpreter; the symlinked console script keeps its original shebang (system Python). This can break at runtime if the service needs packages installed only into the venv (e.g., the venv installs transformers==4.40 later). Prefer installing Gunicorn into the venv (or generating a wrapper that uses $VIRTUAL_ENV/bin/python -m gunicorn) instead of symlinking the system script.

Suggested change:

- RUN python3.10 -m venv --system-site-packages $VIRTUAL_ENV && \
-     ln -sf /usr/local/bin/gunicorn $VIRTUAL_ENV/bin/gunicorn && \
+ RUN python3.10 -m venv --system-site-packages "$VIRTUAL_ENV" && \
+     printf '%s\n' '#!/usr/bin/env sh' "exec \"$VIRTUAL_ENV/bin/python\" -m gunicorn \"\$@\"" > "$VIRTUAL_ENV/bin/gunicorn" && \
+     chmod +x "$VIRTUAL_ENV/bin/gunicorn" && \

Copilot AI review requested due to automatic review settings March 26, 2026 03:45
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.



import platform
import os.path

os.environ.setdefault("KERAS_BACKEND", "torch")

Copilot AI Mar 26, 2026


Setting KERAS_BACKEND at module import time makes the torch backend the default for all code paths that import taosanalytics.conf (including environments where TensorFlow backend is desired/available). If the intent is only to default to torch when TensorFlow is not usable, consider making this conditional (e.g., only when tensorflow is absent/unimportable) or moving it into the specific Keras-using modules to avoid a global side effect during config/logging initialization.

Suggested change:

- os.environ.setdefault("KERAS_BACKEND", "torch")
+ # Only default to torch backend when KERAS_BACKEND is not set
+ # and TensorFlow is not available. This avoids a global side effect
+ # for environments that prefer the TensorFlow backend.
+ if "KERAS_BACKEND" not in os.environ:
+     tf_spec = importlib.util.find_spec("tensorflow")
+     if tf_spec is None:
+         os.environ["KERAS_BACKEND"] = "torch"

Comment on lines 7 to +10
import os.path

os.environ.setdefault("KERAS_BACKEND", "torch")


Copilot AI Mar 26, 2026


The PR title/description is about switching the container entrypoint from uWSGI to Gunicorn, but this change also introduces Keras backend selection during taosanalytics.conf import. If this is required for the Gunicorn/preload startup path, please document that connection in the PR description (or split the backend change into a separate PR) so reviewers/operators understand the behavioral change.

"cfg",
"taosanode.config.py",
)
spec = importlib.util.spec_from_file_location("taosanode_config", config_path)

Copilot AI Mar 26, 2026


spec_from_file_location() can return None, and spec.loader may be None; calling spec.loader.exec_module(...) will then raise an AttributeError and make the test fail with a confusing error. Add assertions/guards for spec is not None and spec.loader is not None before creating the module and executing it.

Suggested change:

  spec = importlib.util.spec_from_file_location("taosanode_config", config_path)
+ assert spec is not None
+ assert spec.loader is not None


3 participants