Merge branch 'master' into marlonkeating/upgrade-edx-enterprise-e2a4b9e
#######
Testing
#######

.. contents::
   :depth: 3

Overview
********
We maintain two kinds of tests: unit tests and integration tests.

Most of our tests are unit tests or integration tests.

Test Types
==========

Unit Tests
----------

- Each test case should be concise: setup, execute, check, and
  teardown. If you find yourself writing tests with many steps,
  consider refactoring.

- As a rule of thumb, your unit tests should cover every code branch.

- Mock or patch external dependencies using `unittest.mock`_ functions.

- We unit test Python code (using `unittest`_) and JavaScript (using
  `Jasmine`_).

.. _unittest.mock: https://docs.python.org/3/library/unittest.mock.html
.. _unittest: https://docs.python.org/3/library/unittest.html
.. _Jasmine: http://jasmine.github.io/
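
To make the bullets above concrete, here is a minimal, self-contained sketch of a concise unit test that patches an external dependency with ``unittest.mock``. The function and client are hypothetical illustrations, not edx-platform code:

```python
from unittest import TestCase, mock


def fetch_username(client, user_id):
    """Hypothetical unit under test: delegates to an external API client."""
    return client.get_user(user_id)["username"]


class FetchUsernameTest(TestCase):
    """Concise unit test: setup, execute, check; teardown is implicit."""

    def test_returns_username(self):
        # Setup: replace the external dependency with a Mock.
        client = mock.Mock()
        client.get_user.return_value = {"username": "alice"}
        # Execute.
        result = fetch_username(client, 42)
        # Check: both the return value and the collaboration.
        self.assertEqual(result, "alice")
        client.get_user.assert_called_once_with(42)
```

Because the dependency is mocked, the test touches no real network or database, which keeps it fast and deterministic.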
Integration Tests
-----------------

- Test several units at the same time. Note that you can still mock or patch
  dependencies that are not under test! For example, you might test that

.. _Django test client: https://docs.djangoproject.com/en/dev/topics/testing/overview/
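
As a sketch of that idea (hypothetical units, not edx-platform code): two functions are exercised together for real, and only the outbound mailer dependency, which is not under test, is mocked:

```python
from unittest import TestCase, mock


def normalize(text):
    """First unit: canonicalize a display name."""
    return text.strip().lower()


def send_greeting(name, mailer):
    """Second unit: builds on normalize() and talks to a mailer dependency."""
    body = "Hello, %s!" % normalize(name)
    mailer.send(body)
    return body


class SendGreetingIntegrationTest(TestCase):
    """Integration test: normalize() and send_greeting() run together;
    only the mailer (not under test) is mocked."""

    def test_greeting_uses_normalized_name(self):
        mailer = mock.Mock()
        body = send_greeting("  ALICE  ", mailer)
        self.assertEqual(body, "Hello, alice!")
        mailer.send.assert_called_once_with("Hello, alice!")
```

The assertion exercises both units at once: if either ``normalize`` or the message formatting in ``send_greeting`` regresses, this test fails.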

Test Locations
==============

- Python unit and integration tests: Located in subpackages called
  ``tests``. For example, the tests for the ``capa`` package are
  located in ``xmodule/capa/tests``.

- JavaScript unit tests: the test for ``src/views/module.js`` should be written in
  ``spec/views/module_spec.js``.

Factories
=========

Many tests delegate set-up to a "factory" class. For example, there are
factories for creating courses, problems, and users. This encapsulates
set-up logic from tests.

Factories are often implemented using `FactoryBoy`_.

In general, factories should be located close to the code they use. For
example, the factory for creating problem XML definitions is located in
``xmodule/capa/tests/response_xml_factory.py`` because the
``capa`` package handles problem XML.

.. _FactoryBoy: https://readthedocs.org/projects/factoryboy/
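
To make the pattern concrete, here is a minimal hand-rolled sketch of a test factory, using an invented ``Course`` dataclass rather than a real model; FactoryBoy provides declarative equivalents of this boilerplate (sequences, default field values):

```python
import itertools
from dataclasses import dataclass


@dataclass
class Course:
    """Stand-in model for illustration; real factories build actual courses."""
    org: str
    number: str
    display_name: str


class CourseFactory:
    """Encapsulates set-up logic: tests ask for a Course and override
    only the fields they care about."""

    _counter = itertools.count(1)

    @classmethod
    def create(cls, **overrides):
        # A sequence counter keeps generated objects unique across calls.
        n = next(cls._counter)
        defaults = {
            "org": "TestX",
            "number": "Course%d" % n,
            "display_name": "Test Course %d" % n,
        }
        defaults.update(overrides)
        return Course(**defaults)
```

A test then reads ``course = CourseFactory.create(org="MITx")`` and stays focused on its assertions instead of set-up details.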
Running Python Unit Tests
*************************

The following commands need to be run within a Python environment in
which ``requirements/edx/testing.txt`` has been installed. If you are using a
Docker-based Open edX distribution, then you probably will want to run these
commands within the LMS and/or CMS Docker containers.

We use `pytest`_ to run Python tests. Pytest is a testing framework for Python
and should be your go-to for local Python unit testing.

Pytest (and all of the plugins we use with it) has a lot of options. Use
``pytest --help`` to see them.

Running Python Test Subsets
===========================

When developing tests, it is often helpful to run a single test by itself,
without the overhead of pip installs, UX builds, etc.

Various ways to run tests using pytest::

    pytest path/test_module.py                          # Run all tests in a module.
    pytest path/test_module.py::test_func               # Run a specific test within a module.
    pytest path/test_module.py::TestClass               # Run all tests in a class.
    pytest path/test_module.py::TestClass::test_method  # Run a specific method of a class.
    pytest path/testing/                                # Run all tests in a directory.

For example, this command runs a single Python unit test file::

    pytest xmodule/tests/test_stringify.py

Note:
edx-platform has multiple services (lms, cms) in it. The environment for each
service is different enough that we run some tests in both environments in
GitHub Actions. To test in each of these environments (especially for tests in
the "common" and "xmodule" directories), you will need to test each separately.
To specify that the tests are run with the relevant service as root, add the
``--rootdir`` flag at the end of your pytest call and specify the environment
to test in::

Various tools like ddt create tests with very complex names; rather than figuring
out those names by hand, you can ask pytest to list the tests it would run::

    pytest xmodule/tests/test_stringify.py --collectonly

Testing with migrations
=======================

For the sake of speed, by default the Python unit test database tables
are created directly from apps' models. If you want to run the tests
against a database created by applying the migrations instead, use the
``--create-db --migrations`` options::

    pytest test --create-db --migrations

Debugging a test
================

There are various ways to debug tests in Python, and more specifically with pytest:
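
For instance, two standard pytest options help here: ``--pdb`` drops into the debugger at the point of failure, and ``-s`` disables output capture so ``print()`` and ``breakpoint()`` are usable inside a test. A minimal, self-contained sketch (the file and test names are hypothetical) that invokes pytest programmatically with those flags:

```python
# debug_demo.py -- run pytest on this file with debug-friendly flags.


def test_example():
    # A trivial passing test; with --pdb, a failure here would open pdb.
    assert (1 + 1) == 2


if __name__ == "__main__":
    import pytest

    # -s: no output capture (print()/breakpoint() reach the terminal)
    # --pdb: enter the debugger on the first failure or error
    raise SystemExit(pytest.main([__file__, "-s", "--pdb"]))
```

The same flags work on the command line, e.g. ``pytest path/test_module.py -s --pdb``.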

How to output coverage locally
==============================

These are examples of how to run a single test and get coverage::

.. _YouTube stub server: https://github.com/openedx/edx-platform/blob/master/common/djangoapps/terrain/stubs/tests/test_youtube_stub.py

Handling flaky unit tests
=========================

See this `confluence document <https://openedx.atlassian.net/wiki/spaces/AC/pages/4306337795/Flaky+Test+Process>`_.

Running JavaScript Unit Tests
*****************************

Before running JavaScript unit tests, you will need to be running Firefox or
Chrome in a place visible to edx-platform.
If you are using Tutor Dev to run edx-platform, then you can do so by
installing and enabling the ``test-legacy-js`` plugin from
`openedx-tutor-plugins`_, and then rebuilding the ``openedx-dev`` image::

    tutor plugins install https://github.com/openedx/openedx-tutor-plugins/tree/main/plugins/tutor-contrib-test-legacy-js
    tutor plugins enable test-legacy-js
    tutor images build openedx-dev

.. _openedx-tutor-plugins: https://github.com/openedx/openedx-tutor-plugins/

We use Jasmine (via Karma) to run most JavaScript unit tests, and Jest to run
a small handful of additional JS unit tests. You can use the ``npm run test*``
commands to run them::

    npm run test-karma   # Run all Jasmine+Karma tests.
    npm run test-jest    # Run all Jest tests.
    npm run test         # Run both of the above.

The Karma tests are further broken down into three types, depending on how the
JavaScript under test is built::

    npm run test-karma-vanilla   # Our very oldest JS, which doesn't even use RequireJS
    npm run test-karma-require   # Old JS that uses RequireJS
    npm run test-karma-webpack   # Slightly "newer" JS which is built with Webpack

Unfortunately, at the time of writing, the build for the ``test-karma-webpack``
tests is broken. The tests are excluded from ``npm run test-karma`` so as not
to fail CI. We `may fix this one day`_.

.. _may fix this one day: https://github.com/openedx/edx-platform/issues/35956

To run all Karma+Jasmine tests for a particular top-level edx-platform folder,
you can run::

    npm run test-cms
    npm run test-lms
    npm run test-xmodule
    npm run test-common

Finally, if you want to pass any options to the underlying ``node`` invocation
for Karma+Jasmine tests, you can run one of these specific commands, and put
your arguments after the ``--`` separator::

    npm run test-cms-vanilla -- --your --args --here
    npm run test-cms-require -- --your --args --here
    npm run test-cms-webpack -- --your --args --here
    npm run test-lms-webpack -- --your --args --here
    npm run test-xmodule-vanilla -- --your --args --here
    npm run test-xmodule-webpack -- --your --args --here
    npm run test-common-vanilla -- --your --args --here
    npm run test-common-require -- --your --args --here

Code Quality
************

We use several tools to analyze code quality. The full set of them is::

    mypy $PATHS...
    pycodestyle $PATHS...
    pylint $PATHS...
    lint-imports
    scripts/verify-dunder-init.sh
    make xsslint
    make pii_check
    make check_keywords
    npm run lint

Where ``$PATHS...`` is a list of folders and files to analyze, or nothing if
you would like to analyze the entire codebase (which can take a while).

Code Complexity Tools
=====================

Tool(s) available for evaluating complexity of edx-platform code:

- `plato <https://github.com/es-analysis/plato>`__ for JavaScript code
  complexity. Several options are available on the command line; see its
  documentation. For example, the following command will produce an HTML
  report in a subdirectory called "jscomplexity"::

      plato -q -x common/static/js/vendor/ -t common -e .eslintrc.json -r -d jscomplexity common/static/js/

Other Testing Tips
==================

Connecting to Browser
---------------------

If you want to see the browser being automated for JavaScript tests,
you can connect to the container running it via VNC.

+------------------------+----------------------+
| Browser                | VNC connection       |
+========================+======================+
| Firefox (Default)      | vnc://0.0.0.0:25900  |
+------------------------+----------------------+
| Chrome (via Selenium)  | vnc://0.0.0.0:15900  |
+------------------------+----------------------+

On macOS, enter the VNC connection string in Safari to connect via VNC. The VNC
passwords for both browsers are randomly generated and logged at container
startup, and can be found by running ``make vnc-passwords``.

Most tests are run in Firefox by default. To use Chrome for tests that normally
use Firefox instead, prefix the test command with
``SELENIUM_BROWSER=chrome SELENIUM_HOST=edx.devstack.chrome``.

Running Tests on Paver Scripts
------------------------------

To run tests on the scripts that power the various Paver commands, use the following command::

    pytest pavelib

Testing using queue servers
---------------------------

When testing problems that use a queue server on AWS (e.g.
sandbox-xqueue.edx.org), you'll need to run your server on your public IP, like so::

    ./manage.py lms runserver 0.0.0.0:8000

When you connect to the LMS, you need to use the public IP. Use
``ifconfig`` to find it, and connect to, e.g.,
``http://18.3.4.5:8000/``.

##########################################
0. edx-platform Static Asset Pipeline Plan
##########################################

Status
******

Accepted ~2017.
Partially adopted 2017-2024.

This was an old plan for modernizing Open edX's frontend assets. We've
retroactively turned it into an ADR because it has some valuable insights.
Although most of these improvements weren't applied as written, these ideas
(particularly, separating Python concerns from frontend tooling concerns) were
applied to both legacy edx-platform assets as well as the Micro-Frontend
framework that was developed 2017-2019.

Context, Decision, Consequences
*******************************

Static asset handling in edx-platform has evolved in a messy way over the
years. This has led to a lot of complexity and inconsistencies. This is a
proposal for cleaning that up. Note that this is not a detailed guide for how
to write React or Bootstrap code. This is instead going to talk about
conventions for how we arrange, extract, and compile static assets.

Requirements
============

Any proposed solution must support:

* Other kinds of pluggability???

Assumptions
===========

Some assumptions/opinions that this proposal is based on:

* It should be possible to pre-build static assets and deploy them onto S3 or
  similar.

Where We Are Today (2017)
=========================

We have a static asset pipeline that is mostly driven by Django's built-in
staticfiles finders and the collectstatic process.

places (typically ``/edx/var/edxapp/staticfiles`` for LMS and
``/edx/var/edxapp/staticfiles/studio`` for Studio) and can be collected
separately. However in practice they're always run together because we deploy
them from the same commits and to the same servers.

Django vs. Webpack Conventions
==============================

The Django convention for having an app with bundled assets is to namespace them
locally with the app name so that they get their own directories when they are
collected.

the root of edx-platform, which would specify all bundles in the project.

TODO: The big, "pluggable Webpack components" question.

Proposed Repo Structure
=======================

All assets that are in common spaces like ``common/static``, ``lms/static``,
and ``cms/static`` would be moved to be under the Django apps that they are a
part of and follow the Django naming convention (e.g. …);
any client-side templates will be put in ``static/{appname}/templates``.

Proposed Compiled Structure
===========================

This is meant to be a sample of the different types of things we'd have, not a
full list::

    /theming/themes/open-edx
    /red-theme
    /edx.org

    # XBlocks still collect their assets into a common space (/xmodule goes away)
    # We consider this to be the XBlock Runtime's app, and it collects static
    # assets from installed XBlocks.
    /xblock

Django vs. Webpack Roles
========================

Rule of thumb: Django/Python still serves static assets, Webpack processes and
optimizes them.

package.json
############

::

    "watch-webpack": "npm run webpack-dev -- --watch",
    "watch-sass": "scripts/watch_sass.sh",
    "lint": "python scripts/eslint.py",
    "test": "npm run test-jest && npm run test-karma",
    "test-jest": "jest",
    "test-karma": "npm run test-karma-vanilla && npm run test-karma-require && echo 'WARNING: Skipped broken webpack tests. For details, see: https://github.com/openedx/edx-platform/issues/35956'",
    "test-karma-vanilla": "npm run test-cms-vanilla && npm run test-xmodule-vanilla && npm run test-common-vanilla",
    "test-karma-require": "npm run test-cms-require && npm run test-common-require",
    "test-karma-webpack": "npm run test-cms-webpack && npm run test-lms-webpack && npm run test-xmodule-webpack",
    "test-karma-conf": "${NODE_WRAPPER:-xvfb-run --auto-servernum} node --max_old_space_size=4096 node_modules/.bin/karma start --single-run=true --capture-timeout=60000 --browsers=FirefoxNoUpdates",
    "test-cms": "npm run test-cms-vanilla && npm run test-cms-require && npm run test-cms-webpack",
    "test-cms-vanilla": "npm run test-karma-conf -- cms/static/karma_cms.conf.js",
    "test-cms-require": "npm run test-karma-conf -- cms/static/karma_cms_squire.conf.js",
    "test-cms-webpack": "npm run test-karma-conf -- cms/static/karma_cms_webpack.conf.js",
    "test-lms": "npm run test-jest && npm run test-lms-webpack",
    "test-lms-webpack": "npm run test-karma-conf -- lms/static/karma_lms.conf.js",
    "test-xmodule": "npm run test-xmodule-vanilla && npm run test-xmodule-webpack",
    "test-xmodule-vanilla": "npm run test-karma-conf -- xmodule/js/karma_xmodule.conf.js",
    "test-xmodule-webpack": "npm run test-karma-conf -- xmodule/js/karma_xmodule_webpack.conf.js",
    "test-common": "npm run test-common-vanilla && npm run test-common-require",
    "test-common-vanilla": "npm run test-karma-conf -- common/static/karma_common.conf.js",
    "test-common-require": "npm run test-karma-conf -- common/static/karma_common_requirejs.conf.js"
    },
    "dependencies": {
        "@babel/core": "7.26.0",