* Add and push Dockerfile; add decentralized devstack settings
  Co-Authored-By: Diana Huang <dkh@edx.org>
  Co-Authored-By: Kyle McCormick <kmccormick@edx.org>

* Remove Python requirements hack
  Remove the attempted optimization to the installation of Python package dependencies. The dependencies in edx-platform change about three times per day, so this was of dubious value. And because npm is run through nodeenv, which is a Python package, the Python dependency installation has to happen first anyway.

* ARCHBOM-1439: Change workdir to /edx/app/edxapp/edx-platform (#24835)
  Context: the Dockerfile tries to stay in sync with the legacy setup. In the Ansible configuration, the directory structure is arranged so that things related to the app but not in the codebase, such as the env file, end up in /edx/app/edxapp/, while the codebase ends up in /edx/app/edxapp/edx-platform. Apparently by accident, the Dockerfile used /edx/app/edx-platform/edx-platform instead of /edx/app/edxapp/edx-platform. This commit makes the Dockerfile better reflect what currently happens in production.

* Update ports for decentralized devstack ARCHBOM-1447 (#24841)
  Switch from the LMS ports we've historically used for NGINX to those used for gunicorn, and fix the Studio ports to match the ones we've historically used for its gunicorn service. Also remove some leftover bits of the requirements hack.
  Co-authored-by: Adam Blackwell <ablackwell@edx.org>
  Co-authored-by: Diana Huang <dkh@edx.org>
  Co-authored-by: jinder1s <msingh@edx.org>
  Co-authored-by: Jeremy Bowman <jbowman@edx.org>
  Co-authored-by: Manjinder Singh <49171515+jinder1s@users.noreply.github.com>
49 lines
1.5 KiB
Python
"""
|
|
gunicorn configuration file: http://docs.gunicorn.org/en/stable/configure.html
|
|
"""
|
|
|
|
preload_app = False
|
|
timeout = 300
|
|
bind = "127.0.0.1:8010"
|
|
pythonpath = "/edx/app/edxapp/edx-platform"
|
|
max_requests = 50
|
|
workers = 7
|
|
|
|
|
|
def pre_request(worker, req):
|
|
worker.log.info("%s %s" % (req.method, req.path))
|
|
|
|
|
|
def close_all_caches():
|
|
"""
|
|
Close the cache so that newly forked workers cannot accidentally share
|
|
the socket with the processes they were forked from. This prevents a race
|
|
condition in which one worker could get a cache response intended for
|
|
another worker.
|
|
We do this in a way that is safe for 1.4 and 1.8 while we still have some
|
|
1.4 installations.
|
|
"""
|
|
from django.conf import settings
|
|
from django.core import cache as django_cache
|
|
if hasattr(django_cache, 'caches'):
|
|
get_cache = django_cache.caches.__getitem__
|
|
else:
|
|
get_cache = django_cache.get_cache # pylint: disable=no-member
|
|
for cache_name in settings.CACHES:
|
|
cache = get_cache(cache_name)
|
|
if hasattr(cache, 'close'):
|
|
cache.close()
|
|
|
|
# The 1.4 global default cache object needs to be closed also: 1.4
|
|
# doesn't ensure you get the same object when requesting the same
|
|
# cache. The global default is a separate Python object from the cache
|
|
# you get with get_cache("default"), so it will have its own connection
|
|
# that needs to be closed.
|
|
cache = django_cache.cache
|
|
if hasattr(cache, 'close'):
|
|
cache.close()
|
|
|
|
|
|
def post_fork(_server, _worker):
|
|
close_all_caches()
|
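The version-shim dispatch inside `close_all_caches` (prefer the newer `caches` registry when present, otherwise fall back to the deprecated `get_cache` helper) can be illustrated without Django. The classes below are hypothetical stand-ins for the two shapes of `django.core.cache`, not edx-platform code:

```python
class NewStyleCacheModule:
    """Stand-in for django.core.cache on newer Django, which exposes `caches`."""
    caches = {"default": "new-default-cache"}


class OldStyleCacheModule:
    """Stand-in for the Django 1.4-era module, which exposes get_cache()."""
    @staticmethod
    def get_cache(name):
        return "old-%s-cache" % name


def lookup(django_cache, name):
    # Same dispatch pattern as close_all_caches above: feature-detect the
    # `caches` registry with hasattr() rather than checking version numbers.
    if hasattr(django_cache, 'caches'):
        get_cache = django_cache.caches.__getitem__
    else:
        get_cache = django_cache.get_cache
    return get_cache(name)


print(lookup(NewStyleCacheModule, "default"))  # new-default-cache
print(lookup(OldStyleCacheModule, "default"))  # old-default-cache
```

Feature detection via `hasattr` keeps the config working across both Django versions during a rolling upgrade, which is exactly why the real hook uses it instead of pinning to one API.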