diff --git a/AUTHORS b/AUTHORS index 154b0c9b98..7d6397629f 100644 --- a/AUTHORS +++ b/AUTHORS @@ -75,3 +75,4 @@ Frances Botsford Jonah Stanley Slater Victoroff Peter Fogg +Renzo Lucioni \ No newline at end of file diff --git a/CHANGELOG.rst b/CHANGELOG.rst new file mode 100644 index 0000000000..bbaf3f3a6b --- /dev/null +++ b/CHANGELOG.rst @@ -0,0 +1,117 @@ +Change Log +---------- + +These are notable changes in edx-platform. This is a rolling list of changes, +in roughly chronological order, most recent first. Add your entries at or near +the top. Include a label indicating the component affected. + +LMS: Problem rescoring. Added options on the Grades tab of the +Instructor Dashboard to allow all students' submissions for a +particular problem to be rescored. Also supports resetting all +students' number of attempts to zero. Provides a list of background +tasks that are currently running for the course, and an option to +see a history of background tasks for a given problem. + +LMS: Forums. Added handling for case where discussion module can get `None` as +value of lms.start in `lms/djangoapps/django_comment_client/utils.py` + +Studio, LMS: Make ModelTypes more strict about their expected content (for +instance, Boolean, Integer, String), but also allow them to hold either the +typed value, or a String that can be converted to their typed value. For example, +an Integer can contain 3 or '3'. This change required an update to the xblock library. + +LMS: Courses whose id matches a regex in the COURSES_WITH_UNSAFE_CODE Django +setting now run entirely outside the Python sandbox. + +Blades: Added tests for Video Alpha player. + +Blades: Video Alpha bug fix for speed changing to 1.0 in Firefox. + +Blades: Additional event tracking added to Video Alpha: fullscreen switch, show/hide +captions.
+ +CMS: Allow editors to delete uploaded files/assets + +XModules: `XModuleDescriptor.__init__` and `XModule.__init__` dropped the +`location` parameter (and added it as a field), and renamed `system` to `runtime`, +to accord more closely to `XBlock.__init__` + +LMS: Some errors handling Non-ASCII data in XML courses have been fixed. + +LMS: Add page-load tracking using segment-io (if SEGMENT_IO_LMS_KEY and +SEGMENT_IO_LMS feature flag is on) + +Blades: Simplify calc.py (which is used for the Numerical/Formula responses); add trig/other functions. + +LMS: Background colors on login, register, and courseware have been corrected +back to white. + +LMS: Accessibility improvements have been made to several courseware and +navigation elements. + +LMS: Small design/presentation changes to login and register views. + +LMS: Functionality added to instructor enrollment tab in LMS such that an invited +student can be auto-enrolled in the course immediately, or upon account activation +if not yet a current student. + +Blades: Staff debug info is now accessible for Graphical Slider Tool problems. + +Blades: For Video Alpha the events ready, play, pause, seek, and speed change +are logged on the server (in the logs). + +Common: all dates and times are now time zone aware datetimes. No code should create or use struct_times or naive +datetimes. + +Common: Developers can now have private Django settings files. + +Common: Safety code added to prevent anything above the vertical level in the +course tree from being marked as version='draft'. It will raise an exception if +the code tries to so mark a node. We need the backtraces to figure out where +this very infrequent intermittent marking was occurring. It was making courses +look different in Studio than in LMS. + +Deploy: MKTG_URLS is now read from env.json. + +Common: Theming makes it possible to change the look of the site (contributed +by Stanford). + +Common: Accessibility UI fixes. + +Common: The "duplicate email" error message is more informative.
+ +Studio: Component metadata settings editor. + +Studio: Autoplay for Video Alpha is disabled (only in Studio). + +Studio: Single-click creation for video and discussion components. + +Studio: fixed a bad link in the activation page. + +LMS: Changed the help button text. + +LMS: Fixed failing numeric response (decimal but no trailing digits). + +LMS: XML Error module no longer shows students a stack trace. + +Blades: Videoalpha. + +XModules: Added partial credit for foldit module. + +XModules: Added "randomize" XModule to list of XModule types. + +XModules: Show errors with full descriptors. + +XQueue: Fixed (hopefully) worker crash when the connection to RabbitMQ is +dropped suddenly. + +XQueue: Upload file submissions to a specially named bucket in S3. + +Common: Removed request debugger. + +Common: Updated Django to version 1.4.5. + +Common: Updated CodeJail. + +Common: Allow setting of authentication session cookie name. + diff --git a/README.md b/README.md index 3a6236ea70..92a4116354 100644 --- a/README.md +++ b/README.md @@ -115,7 +115,7 @@ CMS templates. Fortunately, `rake` will do all of this for you! Just run: If you are running these commands using the [`zsh`](http://www.zsh.org/) shell, zsh will assume that you are doing -[shell globbing](https://en.wikipedia.org/wiki/Glob_(programming)), search for +[shell globbing](https://en.wikipedia.org/wiki/Glob_%28programming%29), search for a file in your directory named `django-adminsyncdb` or `django-adminmigrate`, and fail. To fix this, just surround the argument with quotation marks, so that you're running `rake "django-admin[syncdb]"`. diff --git a/cms/CHANGELOG.md b/cms/CHANGELOG.md deleted file mode 100644 index d21d08d23c..0000000000 --- a/cms/CHANGELOG.md +++ /dev/null @@ -1,21 +0,0 @@ -Instructions -============ -For each pull request, add one or more lines to the bottom of the change list. 
When -code is released to production, change the `Upcoming` entry to todays date, and add -a new block at the bottom of the file. - - Upcoming - -------- - -Change log entries should be targeted at end users. A good place to start is the -user story that instigated the pull request. - - -Changes -======= - -Upcoming --------- -* Fix: Deleting last component in a unit does not work -* Fix: Unit name is editable when a unit is public -* Fix: Visual feedback inconsistent when saving a unit name change diff --git a/cms/djangoapps/contentstore/features/advanced-settings.feature b/cms/djangoapps/contentstore/features/advanced-settings.feature index 558294e890..13600f2086 100644 --- a/cms/djangoapps/contentstore/features/advanced-settings.feature +++ b/cms/djangoapps/contentstore/features/advanced-settings.feature @@ -28,11 +28,18 @@ Feature: Advanced (manual) course policy Scenario: Test how multi-line input appears Given I am on the Advanced Course Settings page in Studio - When I create a JSON object as a value + When I create a JSON object as a value for "discussion_topics" Then it is displayed as formatted And I reload the page Then it is displayed as formatted + Scenario: Test error if value supplied is of the wrong type + Given I am on the Advanced Course Settings page in Studio + When I create a JSON object as a value for "display_name" + Then I get an error on save + And I reload the page + Then the policy key value is unchanged + Scenario: Test automatic quoting of non-JSON values Given I am on the Advanced Course Settings page in Studio When I create a non-JSON value not in quotes diff --git a/cms/djangoapps/contentstore/features/advanced-settings.py b/cms/djangoapps/contentstore/features/advanced-settings.py index eb00c06ba9..4995f3505d 100644 --- a/cms/djangoapps/contentstore/features/advanced-settings.py +++ b/cms/djangoapps/contentstore/features/advanced-settings.py @@ -2,13 +2,8 @@ #pylint: disable=W0621 from lettuce import world, step -from common import 
* -from nose.tools import assert_false, assert_equal - -""" -http://selenium.googlecode.com/svn/trunk/docs/api/py/webdriver/selenium.webdriver.common.keys.html -""" -from selenium.webdriver.common.keys import Keys +from nose.tools import assert_false, assert_equal, assert_regexp_matches +from common import type_in_codemirror KEY_CSS = '.key input.policy-key' VALUE_CSS = 'textarea.json' @@ -38,13 +33,7 @@ def press_the_notification_button(step, name): @step(u'I edit the value of a policy key$') def edit_the_value_of_a_policy_key(step): - """ - It is hard to figure out how to get into the CodeMirror - area, so cheat and do it from the policy key field :) - """ - world.css_find(".CodeMirror")[get_index_of(DISPLAY_NAME_KEY)].click() - g = world.css_find("div.CodeMirror.CodeMirror-focused > div > textarea") - g._element.send_keys(Keys.ARROW_LEFT, ' ', 'X') + type_in_codemirror(get_index_of(DISPLAY_NAME_KEY), 'X') @step(u'I edit the value of a policy key and save$') @@ -52,9 +41,9 @@ def edit_the_value_of_a_policy_key_and_save(step): change_display_name_value(step, '"foo"') -@step('I create a JSON object as a value$') -def create_JSON_object(step): - change_display_name_value(step, '{"key": "value", "key_2": "value_2"}') +@step('I create a JSON object as a value for "(.*)"$') +def create_JSON_object(step, key): + change_value(step, key, '{"key": "value", "key_2": "value_2"}') @step('I create a non-JSON value not in quotes$') @@ -82,7 +71,12 @@ def they_are_alphabetized(step): @step('it is displayed as formatted$') def it_is_formatted(step): - assert_policy_entries([DISPLAY_NAME_KEY], ['{\n "key": "value",\n "key_2": "value_2"\n}']) + assert_policy_entries(['discussion_topics'], ['{\n "key": "value",\n "key_2": "value_2"\n}']) + + +@step('I get an error on save$') +def error_on_save(step): + assert_regexp_matches(world.css_text('#notification-error-description'), 'Incorrect setting format') @step('it is displayed as a string') @@ -124,12 +118,9 @@ def 
get_display_name_value(): def change_display_name_value(step, new_value): + change_value(step, DISPLAY_NAME_KEY, new_value) - world.css_find(".CodeMirror")[get_index_of(DISPLAY_NAME_KEY)].click() - g = world.css_find("div.CodeMirror.CodeMirror-focused > div > textarea") - display_name = get_display_name_value() - for count in range(len(display_name)): - g._element.send_keys(Keys.END, Keys.BACK_SPACE) - # Must delete "" before typing the JSON value - g._element.send_keys(Keys.END, Keys.BACK_SPACE, Keys.BACK_SPACE, new_value) + +def change_value(step, key, new_value): + type_in_codemirror(get_index_of(key), new_value) press_the_notification_button(step, "Save") diff --git a/cms/djangoapps/contentstore/features/common.py b/cms/djangoapps/contentstore/features/common.py index 494192ad06..c28b35b1c2 100644 --- a/cms/djangoapps/contentstore/features/common.py +++ b/cms/djangoapps/contentstore/features/common.py @@ -169,3 +169,14 @@ def open_new_unit(step): step.given('I have added a new subsection') step.given('I expand the first section') world.css_click('a.new-unit-item') + + +def type_in_codemirror(index, text): + world.css_click(".CodeMirror", index=index) + g = world.css_find("div.CodeMirror.CodeMirror-focused > div > textarea") + if world.is_mac(): + g._element.send_keys(Keys.COMMAND + 'a') + else: + g._element.send_keys(Keys.CONTROL + 'a') + g._element.send_keys(Keys.DELETE) + g._element.send_keys(text) diff --git a/cms/djangoapps/contentstore/features/problem-editor.feature b/cms/djangoapps/contentstore/features/problem-editor.feature index bde350d8a3..cc1d766d2e 100644 --- a/cms/djangoapps/contentstore/features/problem-editor.feature +++ b/cms/djangoapps/contentstore/features/problem-editor.feature @@ -3,65 +3,71 @@ Feature: Problem Editor Scenario: User can view metadata Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then I see five alphabetized settings and their expected values And Edit High 
Level Source is not visible Scenario: User can modify String values Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then I can modify the display name And my display name change is persisted on save Scenario: User can specify special characters in String values Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then I can specify special characters in the display name And my special characters and persisted on save Scenario: User can revert display name to unset Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then I can revert the display name to unset And my display name is unset on save Scenario: User can select values in a Select Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then I can select Per Student for Randomization And my change to randomization is persisted And I can revert to the default value for randomization Scenario: User can modify float input values Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then I can set the weight to "3.5" And my change to weight is persisted And I can revert to the default value of unset for weight Scenario: User cannot type letters in float number field Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then if I set the weight to "abc", it remains unset Scenario: User cannot type decimal values integer number field Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then if I set the max attempts to "2.34", it displays initially as "234", and is persisted as "234" Scenario: User cannot type out of range values in an integer number field Given I have created a Blank Common Problem - And I edit 
and select Settings + When I edit and select Settings Then if I set the max attempts to "-3", it displays initially as "-3", and is persisted as "0" Scenario: Settings changes are not saved on Cancel Given I have created a Blank Common Problem - And I edit and select Settings + When I edit and select Settings Then I can set the weight to "3.5" And I can modify the display name Then If I press Cancel my changes are not persisted Scenario: Edit High Level source is available for LaTeX problem Given I have created a LaTeX Problem - And I edit and select Settings + When I edit and select Settings Then Edit High Level Source is visible + + Scenario: High Level source is persisted for LaTeX problem (bug STUD-280) + Given I have created a LaTeX Problem + When I edit and compile the High Level Source + Then my change to the High Level Source is persisted + And when I view the High Level Source I see my changes diff --git a/cms/djangoapps/contentstore/features/problem-editor.py b/cms/djangoapps/contentstore/features/problem-editor.py index 5dfcf55046..8691a6772e 100644 --- a/cms/djangoapps/contentstore/features/problem-editor.py +++ b/cms/djangoapps/contentstore/features/problem-editor.py @@ -3,6 +3,7 @@ from lettuce import world, step from nose.tools import assert_equal +from common import type_in_codemirror DISPLAY_NAME = "Display Name" MAXIMUM_ATTEMPTS = "Maximum Attempts" @@ -41,7 +42,9 @@ def i_see_five_settings_with_values(step): @step('I can modify the display name') def i_can_modify_the_display_name(step): - world.get_setting_entry(DISPLAY_NAME).find_by_css('.setting-input')[0].fill('modified') + # Verifying that the display name can be a string containing a floating point value + # (to confirm that we don't throw an error because it is of the wrong type). 
+ world.get_setting_entry(DISPLAY_NAME).find_by_css('.setting-input')[0].fill('3.4') verify_modified_display_name() @@ -133,12 +136,12 @@ def set_the_max_attempts(step, max_attempts_set, max_attempts_displayed, max_att @step('Edit High Level Source is not visible') def edit_high_level_source_not_visible(step): - verify_high_level_source(step, False) + verify_high_level_source_links(step, False) @step('Edit High Level Source is visible') -def edit_high_level_source_visible(step): - verify_high_level_source(step, True) +def edit_high_level_source_links_visible(step): + verify_high_level_source_links(step, True) @step('If I press Cancel my changes are not persisted') @@ -151,13 +154,33 @@ def cancel_does_not_save_changes(step): @step('I have created a LaTeX Problem') def create_latex_problem(step): world.click_new_component_button(step, '.large-problem-icon') - # Go to advanced tab (waiting for the tab to be visible) - world.css_find('#ui-id-2') + # Go to advanced tab. world.css_click('#ui-id-2') world.click_component_from_menu("i4x://edx/templates/problem/Problem_Written_in_LaTeX", '.xmodule_CapaModule') -def verify_high_level_source(step, visible): +@step('I edit and compile the High Level Source') +def edit_latex_source(step): + open_high_level_source() + type_in_codemirror(1, "hi") + world.css_click('.hls-compile') + + +@step('my change to the High Level Source is persisted') +def high_level_source_persisted(step): + def verify_text(driver): + return world.css_find('.problem').text == 'hi' + + world.wait_for(verify_text) + + +@step('I view the High Level Source I see my changes') +def high_level_source_in_editor(step): + open_high_level_source() + assert_equal('hi', world.css_find('.source-edit-box').value) + + +def verify_high_level_source_links(step, visible): assert_equal(visible, world.is_css_present('.launch-latex-compiler')) world.cancel_component(step) assert_equal(visible, world.is_css_present('.upload-button')) @@ -172,7 +195,7 @@ def 
verify_modified_randomization(): def verify_modified_display_name(): - world.verify_setting_entry(world.get_setting_entry(DISPLAY_NAME), DISPLAY_NAME, 'modified', True) + world.verify_setting_entry(world.get_setting_entry(DISPLAY_NAME), DISPLAY_NAME, '3.4', True) def verify_modified_display_name_with_special_chars(): @@ -185,3 +208,8 @@ def verify_unset_display_name(): def set_weight(weight): world.get_setting_entry(PROBLEM_WEIGHT).find_by_css('.setting-input')[0].fill(weight) + + +def open_high_level_source(): + world.css_click('a.edit-button') + world.css_click('.launch-latex-compiler > a') diff --git a/cms/djangoapps/contentstore/features/section.py b/cms/djangoapps/contentstore/features/section.py index 4a628ff72b..9d63fa73c8 100644 --- a/cms/djangoapps/contentstore/features/section.py +++ b/cms/djangoapps/contentstore/features/section.py @@ -9,34 +9,34 @@ from nose.tools import assert_equal @step('I click the new section link$') -def i_click_new_section_link(step): +def i_click_new_section_link(_step): link_css = 'a.new-courseware-section-button' world.css_click(link_css) @step('I enter the section name and click save$') -def i_save_section_name(step): +def i_save_section_name(_step): save_section_name('My Section') @step('I enter a section name with a quote and click save$') -def i_save_section_name_with_quote(step): +def i_save_section_name_with_quote(_step): save_section_name('Section with "Quote"') @step('I have added a new section$') -def i_have_added_new_section(step): +def i_have_added_new_section(_step): add_section() @step('I click the Edit link for the release date$') -def i_click_the_edit_link_for_the_release_date(step): +def i_click_the_edit_link_for_the_release_date(_step): button_css = 'div.section-published-date a.edit-button' world.css_click(button_css) @step('I save a new section release date$') -def i_save_a_new_section_release_date(step): +def i_save_a_new_section_release_date(_step): set_date_and_time('input.start-date.date.hasDatepicker', 
'12/25/2013', 'input.start-time.time.ui-timepicker-input', '00:00') world.browser.click_link_by_text('Save') @@ -46,35 +46,35 @@ def i_save_a_new_section_release_date(step): @step('I see my section on the Courseware page$') -def i_see_my_section_on_the_courseware_page(step): +def i_see_my_section_on_the_courseware_page(_step): see_my_section_on_the_courseware_page('My Section') @step('I see my section name with a quote on the Courseware page$') -def i_see_my_section_name_with_quote_on_the_courseware_page(step): +def i_see_my_section_name_with_quote_on_the_courseware_page(_step): see_my_section_on_the_courseware_page('Section with "Quote"') @step('I click to edit the section name$') -def i_click_to_edit_section_name(step): +def i_click_to_edit_section_name(_step): world.css_click('span.section-name-span') @step('I see the complete section name with a quote in the editor$') -def i_see_complete_section_name_with_quote_in_editor(step): +def i_see_complete_section_name_with_quote_in_editor(_step): css = '.section-name-edit input[type=text]' assert world.is_css_present(css) assert_equal(world.browser.find_by_css(css).value, 'Section with "Quote"') @step('the section does not exist$') -def section_does_not_exist(step): +def section_does_not_exist(_step): css = 'h3[data-name="My Section"]' assert world.is_css_not_present(css) @step('I see a release date for my section$') -def i_see_a_release_date_for_my_section(step): +def i_see_a_release_date_for_my_section(_step): import re css = 'span.published-status' @@ -83,26 +83,32 @@ def i_see_a_release_date_for_my_section(step): # e.g. 
11/06/2012 at 16:25 msg = 'Will Release:' - date_regex = '[01][0-9]\/[0-3][0-9]\/[12][0-9][0-9][0-9]' - time_regex = '[0-2][0-9]:[0-5][0-9]' - match_string = '%s %s at %s' % (msg, date_regex, time_regex) + date_regex = r'(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) \d\d?, \d{4}' + if not re.search(date_regex, status_text): + print status_text, date_regex + time_regex = r'[0-2]\d:[0-5]\d( \w{3})?' + if not re.search(time_regex, status_text): + print status_text, time_regex + match_string = r'%s\s+%s at %s' % (msg, date_regex, time_regex) + if not re.match(match_string, status_text): + print status_text, match_string assert re.match(match_string, status_text) @step('I see a link to create a new subsection$') -def i_see_a_link_to_create_a_new_subsection(step): +def i_see_a_link_to_create_a_new_subsection(_step): css = 'a.new-subsection-item' assert world.is_css_present(css) @step('the section release date picker is not visible$') -def the_section_release_date_picker_not_visible(step): +def the_section_release_date_picker_not_visible(_step): css = 'div.edit-subsection-publish-settings' assert not world.css_visible(css) @step('the section release date is updated$') -def the_section_release_date_is_updated(step): +def the_section_release_date_is_updated(_step): css = 'span.published-status' status_text = world.css_text(css) assert_equal(status_text, 'Will Release: 12/25/2013 at 00:00 UTC') diff --git a/cms/djangoapps/contentstore/features/video.feature b/cms/djangoapps/contentstore/features/video.feature index 07771c9d61..0129732d30 100644 --- a/cms/djangoapps/contentstore/features/video.feature +++ b/cms/djangoapps/contentstore/features/video.feature @@ -8,3 +8,8 @@ Feature: Video Component Scenario: Creating a video takes a single click Given I have clicked the new unit button Then creating a video takes a single click + + Scenario: Captions are shown correctly + Given I have created a Video component + And I have hidden captions + Then when I view the video it 
does not show the captions diff --git a/cms/djangoapps/contentstore/features/video.py b/cms/djangoapps/contentstore/features/video.py index 7cbe8a2258..fd8624999e 100644 --- a/cms/djangoapps/contentstore/features/video.py +++ b/cms/djangoapps/contentstore/features/video.py @@ -16,3 +16,13 @@ def video_takes_a_single_click(step): assert(not world.is_css_present('.xmodule_VideoModule')) world.css_click("a[data-location='i4x://edx/templates/video/default']") assert(world.is_css_present('.xmodule_VideoModule')) + + +@step('I have hidden captions') +def set_show_captions_false(step): + world.css_click('a.hide-subtitles') + + +@step('when I view the video it does not show the captions') +def does_not_show_captions(step): + assert world.css_find('.video')[0].has_class('closed') diff --git a/cms/djangoapps/contentstore/management/commands/empty_asset_trashcan.py b/cms/djangoapps/contentstore/management/commands/empty_asset_trashcan.py new file mode 100644 index 0000000000..9af3277a2b --- /dev/null +++ b/cms/djangoapps/contentstore/management/commands/empty_asset_trashcan.py @@ -0,0 +1,25 @@ +from django.core.management.base import BaseCommand, CommandError +from xmodule.course_module import CourseDescriptor +from xmodule.contentstore.utils import empty_asset_trashcan +from xmodule.modulestore.django import modulestore +from .prompt import query_yes_no + + +class Command(BaseCommand): + help = '''Empty the trashcan. Can pass an optional course_id to limit the damage.''' + + def handle(self, *args, **options): + if len(args) != 1 and len(args) != 0: + raise CommandError("empty_asset_trashcan requires one or no arguments: ||") + + locs = [] + + if len(args) == 1: + locs.append(CourseDescriptor.id_to_location(args[0])) + else: + courses = modulestore('direct').get_courses() + for course in courses: + locs.append(course.location) + + if query_yes_no("Emptying trashcan. 
Confirm?", default="no"): + empty_asset_trashcan(locs) diff --git a/cms/djangoapps/contentstore/management/commands/restore_asset_from_trashcan.py b/cms/djangoapps/contentstore/management/commands/restore_asset_from_trashcan.py new file mode 100644 index 0000000000..6770bfaf44 --- /dev/null +++ b/cms/djangoapps/contentstore/management/commands/restore_asset_from_trashcan.py @@ -0,0 +1,13 @@ +from django.core.management.base import BaseCommand, CommandError +from xmodule.contentstore.utils import restore_asset_from_trashcan + + +class Command(BaseCommand): + help = '''Restore a deleted asset from the trashcan back to its original course''' + + def handle(self, *args, **options): + if len(args) != 1: + raise CommandError("restore_asset_from_trashcan requires one argument: ") + + restore_asset_from_trashcan(args[0]) + diff --git a/cms/djangoapps/contentstore/tests/test_checklists.py b/cms/djangoapps/contentstore/tests/test_checklists.py index f0889b0861..54bc726092 100644 --- a/cms/djangoapps/contentstore/tests/test_checklists.py +++ b/cms/djangoapps/contentstore/tests/test_checklists.py @@ -19,6 +19,24 @@ class ChecklistTestCase(CourseTestCase): modulestore = get_modulestore(self.course.location) return modulestore.get_item(self.course.location).checklists + + def compare_checklists(self, persisted, request): + """ + Handles url expansion as possible difference and descends into guts + :param persisted: + :param request: + """ + self.assertEqual(persisted['short_description'], request['short_description']) + compare_urls = (persisted.get('action_urls_expanded') == request.get('action_urls_expanded')) + for pers, req in zip(persisted['items'], request['items']): + self.assertEqual(pers['short_description'], req['short_description']) + self.assertEqual(pers['long_description'], req['long_description']) + self.assertEqual(pers['is_checked'], req['is_checked']) + if compare_urls: + self.assertEqual(pers['action_url'], req['action_url']) + 
self.assertEqual(pers['action_text'], req['action_text']) + self.assertEqual(pers['action_external'], req['action_external']) + def test_get_checklists(self): """ Tests the get checklists method. """ checklists_url = get_url_reverse('Checklists', self.course) @@ -31,9 +49,9 @@ class ChecklistTestCase(CourseTestCase): self.course.checklists = None modulestore = get_modulestore(self.course.location) modulestore.update_metadata(self.course.location, own_metadata(self.course)) - self.assertEquals(self.get_persisted_checklists(), None) + self.assertEqual(self.get_persisted_checklists(), None) response = self.client.get(checklists_url) - self.assertEquals(payload, response.content) + self.assertEqual(payload, response.content) def test_update_checklists_no_index(self): """ No checklist index, should return all of them. """ @@ -43,7 +61,8 @@ class ChecklistTestCase(CourseTestCase): 'name': self.course.location.name}) returned_checklists = json.loads(self.client.get(update_url).content) - self.assertListEqual(self.get_persisted_checklists(), returned_checklists) + for pay, resp in zip(self.get_persisted_checklists(), returned_checklists): + self.compare_checklists(pay, resp) def test_update_checklists_index_ignored_on_get(self): """ Checklist index ignored on get. """ @@ -53,7 +72,8 @@ class ChecklistTestCase(CourseTestCase): 'checklist_index': 1}) returned_checklists = json.loads(self.client.get(update_url).content) - self.assertListEqual(self.get_persisted_checklists(), returned_checklists) + for pay, resp in zip(self.get_persisted_checklists(), returned_checklists): + self.compare_checklists(pay, resp) def test_update_checklists_post_no_index(self): """ No checklist index, will error on post. 
""" @@ -78,13 +98,18 @@ class ChecklistTestCase(CourseTestCase): 'course': self.course.location.course, 'name': self.course.location.name, 'checklist_index': 2}) + + def get_first_item(checklist): + return checklist['items'][0] + payload = self.course.checklists[2] - self.assertFalse(payload.get('is_checked')) - payload['is_checked'] = True + self.assertFalse(get_first_item(payload).get('is_checked')) + get_first_item(payload)['is_checked'] = True returned_checklist = json.loads(self.client.post(update_url, json.dumps(payload), "application/json").content) - self.assertTrue(returned_checklist.get('is_checked')) - self.assertEqual(self.get_persisted_checklists()[2], returned_checklist) + self.assertTrue(get_first_item(returned_checklist).get('is_checked')) + pers = self.get_persisted_checklists() + self.compare_checklists(pers[2], returned_checklist) def test_update_checklists_delete_unsupported(self): """ Delete operation is not supported. """ @@ -93,4 +118,4 @@ class ChecklistTestCase(CourseTestCase): 'name': self.course.location.name, 'checklist_index': 100}) response = self.client.delete(update_url) - self.assertContains(response, 'Unsupported request', status_code=400) \ No newline at end of file + self.assertContains(response, 'Unsupported request', status_code=400) diff --git a/cms/djangoapps/contentstore/tests/test_contentstore.py b/cms/djangoapps/contentstore/tests/test_contentstore.py index 232b68ecc8..9346d2189d 100644 --- a/cms/djangoapps/contentstore/tests/test_contentstore.py +++ b/cms/djangoapps/contentstore/tests/test_contentstore.py @@ -28,6 +28,8 @@ from xmodule.templates import update_templates from xmodule.modulestore.xml_exporter import export_to_xml from xmodule.modulestore.xml_importer import import_from_xml, perform_xlint from xmodule.modulestore.inheritance import own_metadata +from xmodule.contentstore.content import StaticContent +from xmodule.contentstore.utils import restore_asset_from_trashcan, empty_asset_trashcan from 
xmodule.capa_module import CapaDescriptor from xmodule.course_module import CourseDescriptor @@ -35,6 +37,7 @@ from xmodule.seq_module import SequenceDescriptor from xmodule.modulestore.exceptions import ItemNotFoundError from contentstore.views.component import ADVANCED_COMPONENT_TYPES +from xmodule.exceptions import NotFoundError from django_comment_common.utils import are_permissions_roles_seeded from xmodule.exceptions import InvalidVersionError @@ -271,7 +274,7 @@ class ContentStoreToyCourseTest(ModuleStoreTestCase): ) self.assertTrue(getattr(draft_problem, 'is_draft', False)) - #now requery with depth + # now requery with depth course = modulestore('draft').get_item( Location(['i4x', 'edX', 'simple', 'course', '2012_Fall', None]), depth=None @@ -382,6 +385,159 @@ class ContentStoreToyCourseTest(ModuleStoreTestCase): course = module_store.get_item(source_location) self.assertFalse(course.hide_progress_tab) + def test_asset_import(self): + ''' + This test validates that an image asset is imported and a thumbnail was generated for a .gif + ''' + content_store = contentstore() + + module_store = modulestore('direct') + import_from_xml(module_store, 'common/test/data/', ['full'], static_content_store=content_store) + + course_location = CourseDescriptor.id_to_location('edX/full/6.002_Spring_2012') + course = module_store.get_item(course_location) + + self.assertIsNotNone(course) + + # make sure we have some assets in our contentstore + all_assets = content_store.get_all_content_for_course(course_location) + self.assertGreater(len(all_assets), 0) + + # make sure we have some thumbnails in our contentstore + all_thumbnails = content_store.get_all_content_thumbnails_for_course(course_location) + + # + # cdodge: temporarily comment out assertion on thumbnails because many environments + # will not have the jpeg converter installed and this test will fail + # + # + # self.assertGreater(len(all_thumbnails), 0) + + content = None + try: + location = 
StaticContent.get_location_from_path('/c4x/edX/full/asset/circuits_duality.gif') + content = content_store.find(location) + except NotFoundError: + pass + + self.assertIsNotNone(content) + + # + # cdodge: temporarily comment out assertion on thumbnails because many environments + # will not have the jpeg converter installed and this test will fail + # + # self.assertIsNotNone(content.thumbnail_location) + # + # thumbnail = None + # try: + # thumbnail = content_store.find(content.thumbnail_location) + # except: + # pass + # + # self.assertIsNotNone(thumbnail) + + def test_asset_delete_and_restore(self): + ''' + This test will exercise the soft delete/restore functionality of the assets + ''' + content_store = contentstore() + trash_store = contentstore('trashcan') + module_store = modulestore('direct') + + import_from_xml(module_store, 'common/test/data/', ['full'], static_content_store=content_store) + + # look up original (and thumbnail) in content store, should be there after import + location = StaticContent.get_location_from_path('/c4x/edX/full/asset/circuits_duality.gif') + content = content_store.find(location, throw_on_not_found=False) + thumbnail_location = content.thumbnail_location + self.assertIsNotNone(content) + + # + # cdodge: temporarily comment out assertion on thumbnails because many environments + # will not have the jpeg converter installed and this test will fail + # + # self.assertIsNotNone(thumbnail_location) + + # go through the website to do the delete, since the soft-delete logic is in the view + + url = reverse('remove_asset', kwargs={'org': 'edX', 'course': 'full', 'name': '6.002_Spring_2012'}) + resp = self.client.post(url, {'location': '/c4x/edX/full/asset/circuits_duality.gif'}) + self.assertEqual(resp.status_code, 200) + + asset_location = StaticContent.get_location_from_path('/c4x/edX/full/asset/circuits_duality.gif') + + # now try to find it in store, but they should not be there any longer + content = 
content_store.find(asset_location, throw_on_not_found=False) + self.assertIsNone(content) + + if thumbnail_location: + thumbnail = content_store.find(thumbnail_location, throw_on_not_found=False) + self.assertIsNone(thumbnail) + + # now try to find it and the thumbnail in trashcan - should be in there + content = trash_store.find(asset_location, throw_on_not_found=False) + self.assertIsNotNone(content) + + if thumbnail_location: + thumbnail = trash_store.find(thumbnail_location, throw_on_not_found=False) + self.assertIsNotNone(thumbnail) + + # let's restore the asset + restore_asset_from_trashcan('/c4x/edX/full/asset/circuits_duality.gif') + + # now try to find it in courseware store, and they should be back after restore + content = content_store.find(asset_location, throw_on_not_found=False) + self.assertIsNotNone(content) + + if thumbnail_location: + thumbnail = content_store.find(thumbnail_location, throw_on_not_found=False) + self.assertIsNotNone(thumbnail) + + def test_empty_trashcan(self): + ''' + This test will exercise the empting of the asset trashcan + ''' + content_store = contentstore() + trash_store = contentstore('trashcan') + module_store = modulestore('direct') + + import_from_xml(module_store, 'common/test/data/', ['full'], static_content_store=content_store) + + course_location = CourseDescriptor.id_to_location('edX/full/6.002_Spring_2012') + + location = StaticContent.get_location_from_path('/c4x/edX/full/asset/circuits_duality.gif') + content = content_store.find(location, throw_on_not_found=False) + self.assertIsNotNone(content) + + # go through the website to do the delete, since the soft-delete logic is in the view + + url = reverse('remove_asset', kwargs={'org': 'edX', 'course': 'full', 'name': '6.002_Spring_2012'}) + resp = self.client.post(url, {'location': '/c4x/edX/full/asset/circuits_duality.gif'}) + self.assertEqual(resp.status_code, 200) + + # make sure there's something in the trashcan + all_assets = 
trash_store.get_all_content_for_course(course_location) + self.assertGreater(len(all_assets), 0) + + # make sure we have some thumbnails in our trashcan + all_thumbnails = trash_store.get_all_content_thumbnails_for_course(course_location) + # + # cdodge: temporarily comment out assertion on thumbnails because many environments + # will not have the jpeg converter installed and this test will fail + # + # self.assertGreater(len(all_thumbnails), 0) + + # empty the trashcan + empty_asset_trashcan([course_location]) + + # make sure trashcan is empty + all_assets = trash_store.get_all_content_for_course(course_location) + self.assertEqual(len(all_assets), 0) + + + all_thumbnails = trash_store.get_all_content_thumbnails_for_course(course_location) + self.assertEqual(len(all_thumbnails), 0) + def test_clone_course(self): course_data = { @@ -539,7 +695,7 @@ class ContentStoreToyCourseTest(ModuleStoreTestCase): on_disk = loads(grading_policy.read()) self.assertEqual(on_disk, course.grading_policy) - #check for policy.json + # check for policy.json self.assertTrue(filesystem.exists('policy.json')) # compare what's on disk to what we have in the course module diff --git a/cms/djangoapps/contentstore/tests/test_course_settings.py b/cms/djangoapps/contentstore/tests/test_course_settings.py index 2a4ff46038..8c15b1ae95 100644 --- a/cms/djangoapps/contentstore/tests/test_course_settings.py +++ b/cms/djangoapps/contentstore/tests/test_course_settings.py @@ -54,6 +54,7 @@ class CourseDetailsTestCase(CourseTestCase): def test_virgin_fetch(self): details = CourseDetails.fetch(self.course_location) self.assertEqual(details.course_location, self.course_location, "Location not copied into") + self.assertIsNotNone(details.start_date.tzinfo) self.assertIsNone(details.end_date, "end date somehow initialized " + str(details.end_date)) self.assertIsNone(details.enrollment_start, "enrollment_start date somehow initialized " + str(details.enrollment_start)) 
         self.assertIsNone(details.enrollment_end, "enrollment_end date somehow initialized " + str(details.enrollment_end))
@@ -67,7 +68,6 @@ class CourseDetailsTestCase(CourseTestCase):
         jsondetails = json.dumps(details, cls=CourseSettingsEncoder)
         jsondetails = json.loads(jsondetails)
         self.assertTupleEqual(Location(jsondetails['course_location']), self.course_location, "Location !=")
-        # Note, start_date is being initialized someplace. I'm not sure why b/c the default will make no sense.
         self.assertIsNone(jsondetails['end_date'], "end date somehow initialized ")
         self.assertIsNone(jsondetails['enrollment_start'], "enrollment_start date somehow initialized ")
         self.assertIsNone(jsondetails['enrollment_end'], "enrollment_end date somehow initialized ")
@@ -76,6 +76,23 @@ class CourseDetailsTestCase(CourseTestCase):
         self.assertIsNone(jsondetails['intro_video'], "intro_video somehow initialized")
         self.assertIsNone(jsondetails['effort'], "effort somehow initialized")

+    def test_ooc_encoder(self):
+        """
+        Test the encoder out of its original constrained purpose to see if it functions for general use
+        """
+        details = {'location': Location(['tag', 'org', 'course', 'category', 'name']),
+                   'number': 1,
+                   'string': 'string',
+                   'datetime': datetime.datetime.now(UTC())}
+        jsondetails = json.dumps(details, cls=CourseSettingsEncoder)
+        jsondetails = json.loads(jsondetails)
+
+        self.assertIn('location', jsondetails)
+        self.assertIn('org', jsondetails['location'])
+        self.assertEquals('org', jsondetails['location'][1])
+        self.assertEquals(1, jsondetails['number'])
+        self.assertEqual(jsondetails['string'], 'string')
+
     def test_update_and_fetch(self):
         #
         # NOTE: I couldn't figure out how to validly test time setting w/ all the conversions
         jsondetails = CourseDetails.fetch(self.course_location)
@@ -116,11 +133,8 @@ class CourseDetailsViewTest(CourseTestCase):
             self.compare_details_with_encoding(json.loads(resp.content), details.__dict__, field + str(val))

     @staticmethod
-    def convert_datetime_to_iso(datetime):
-        if datetime is not None:
-            return datetime.isoformat("T")
-        else:
-            return None
+    def convert_datetime_to_iso(dt):
+        return Date().to_json(dt)

     def test_update_and_fetch(self):
         details = CourseDetails.fetch(self.course_location)
@@ -151,22 +165,12 @@ class CourseDetailsViewTest(CourseTestCase):
         self.assertEqual(details['intro_video'], encoded.get('intro_video', None), context + " intro_video not ==")
         self.assertEqual(details['effort'], encoded['effort'], context + " efforts not ==")

-    @staticmethod
-    def struct_to_datetime(struct_time):
-        return datetime.datetime(*struct_time[:6], tzinfo=UTC())
-
     def compare_date_fields(self, details, encoded, context, field):
         if details[field] is not None:
             date = Date()
             if field in encoded and encoded[field] is not None:
-                encoded_encoded = date.from_json(encoded[field])
-                dt1 = CourseDetailsViewTest.struct_to_datetime(encoded_encoded)
-
-                if isinstance(details[field], datetime.datetime):
-                    dt2 = details[field]
-                else:
-                    details_encoded = date.from_json(details[field])
-                    dt2 = CourseDetailsViewTest.struct_to_datetime(details_encoded)
+                dt1 = date.from_json(encoded[field])
+                dt2 = details[field]

                 expected_delta = datetime.timedelta(0)
                 self.assertEqual(dt1 - dt2, expected_delta, str(dt1) + "!=" + str(dt2) + " at " + context)
diff --git a/cms/djangoapps/contentstore/tests/tests.py b/cms/djangoapps/contentstore/tests/tests.py
index f769652493..f7f330f91e 100644
--- a/cms/djangoapps/contentstore/tests/tests.py
+++ b/cms/djangoapps/contentstore/tests/tests.py
@@ -3,6 +3,10 @@ from django.core.urlresolvers import reverse
 from .utils import parse_json, user, registration
 from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
+from contentstore.tests.test_course_settings import CourseTestCase
+from xmodule.modulestore.tests.factories import CourseFactory
+import datetime
+from pytz import UTC


 class ContentStoreTestCase(ModuleStoreTestCase):
@@ -162,3 +166,21 @@ class AuthTestCase(ContentStoreTestCase):
         self.assertEqual(resp.status_code, 302)

         # Logged in should work.
+
+
+class ForumTestCase(CourseTestCase):
+    def setUp(self):
+        """ Creates the test course. """
+        super(ForumTestCase, self).setUp()
+        self.course = CourseFactory.create(org='testX', number='727', display_name='Forum Course')
+
+    def test_blackouts(self):
+        now = datetime.datetime.now(UTC)
+        self.course.discussion_blackouts = [(t.isoformat(), t2.isoformat()) for t, t2 in
+                                            [(now - datetime.timedelta(days=14), now - datetime.timedelta(days=11)),
+                                             (now + datetime.timedelta(days=24), now + datetime.timedelta(days=30))]]
+        self.assertTrue(self.course.forum_posts_allowed)
+        self.course.discussion_blackouts = [(t.isoformat(), t2.isoformat()) for t, t2 in
+                                            [(now - datetime.timedelta(days=14), now + datetime.timedelta(days=2)),
+                                             (now + datetime.timedelta(days=24), now + datetime.timedelta(days=30))]]
+        self.assertFalse(self.course.forum_posts_allowed)
diff --git a/cms/djangoapps/contentstore/views/assets.py b/cms/djangoapps/contentstore/views/assets.py
index b5041d3e9f..400013b59b 100644
--- a/cms/djangoapps/contentstore/views/assets.py
+++ b/cms/djangoapps/contentstore/views/assets.py
@@ -25,6 +25,8 @@ from xmodule.modulestore.django import modulestore
 from xmodule.modulestore import Location
 from xmodule.contentstore.content import StaticContent
 from xmodule.util.date_utils import get_default_time_display
+from xmodule.modulestore import InvalidLocationError
+from xmodule.exceptions import NotFoundError

 from ..utils import get_url_reverse
 from .access import get_location_and_verify_access
@@ -62,7 +64,7 @@ def asset_index(request, org, course, name):
         asset_id = asset['_id']
         display_info = {}
         display_info['displayname'] = asset['displayname']
-        display_info['uploadDate'] = get_default_time_display(asset['uploadDate'].timetuple())
+        display_info['uploadDate'] = get_default_time_display(asset['uploadDate'])

         asset_location = StaticContent.compute_location(asset_id['org'], asset_id['course'], asset_id['name'])
         display_info['url'] = StaticContent.get_url_path_from_location(asset_location)
@@ -78,10 +80,17 @@ def asset_index(request, org, course, name):
         'active_tab': 'assets',
         'context_course': course_module,
         'assets': asset_display,
-        'upload_asset_callback_url': upload_asset_callback_url
+        'upload_asset_callback_url': upload_asset_callback_url,
+        'remove_asset_callback_url': reverse('remove_asset', kwargs={
+            'org': org,
+            'course': course,
+            'name': name
+        })
     })


+@login_required
+@ensure_csrf_cookie
 def upload_asset(request, org, course, coursename):
     '''
     cdodge: this method allows for POST uploading of files into the course asset library, which will
@@ -103,6 +112,9 @@ def upload_asset(request, org, course, coursename):
         logging.error('Could not find course' + location)
         return HttpResponseBadRequest()

+    if 'file' not in request.FILES:
+        return HttpResponseBadRequest()
+
     # compute a 'filename' which is similar to the location formatting, we're using the 'filename'
     # nomenclature since we're using a FileSystem paradigm here. We're just imposing
     # the Location string formatting expectations to keep things a bit more consistent
@@ -131,7 +143,7 @@ def upload_asset(request, org, course, coursename):
     readback = contentstore().find(content.location)

     response_payload = {'displayname': content.name,
-                        'uploadDate': get_default_time_display(readback.last_modified_at.timetuple()),
+                        'uploadDate': get_default_time_display(readback.last_modified_at),
                         'url': StaticContent.get_url_path_from_location(content.location),
                         'thumb_url': StaticContent.get_url_path_from_location(thumbnail_location) if thumbnail_content is not None else None,
                         'msg': 'Upload completed'
@@ -142,6 +154,57 @@ def upload_asset(request, org, course, coursename):
     return response


+@ensure_csrf_cookie
+@login_required
+def remove_asset(request, org, course, name):
+    '''
+    This method will perform a 'soft delete' of an asset, which is to copy the asset from
+    the main GridFS collection into a trashcan collection
+    '''
+    get_location_and_verify_access(request, org, course, name)
+
+    location = request.POST['location']
+
+    # make sure the location is valid
+    try:
+        loc = StaticContent.get_location_from_path(location)
+    except InvalidLocationError:
+        # return a 'Bad Request' to browser as we have a malformed Location
+        response = HttpResponse()
+        response.status_code = 400
+        return response
+
+    # also make sure the item to delete actually exists
+    try:
+        content = contentstore().find(loc)
+    except NotFoundError:
+        response = HttpResponse()
+        response.status_code = 404
+        return response
+
+    # ok, save the content into the trashcan
+    contentstore('trashcan').save(content)
+
+    # see if there is a thumbnail as well; if so, move that as well
+    if content.thumbnail_location is not None:
+        try:
+            thumbnail_content = contentstore().find(content.thumbnail_location)
+            contentstore('trashcan').save(thumbnail_content)
+            # hard delete thumbnail from origin
+            contentstore().delete(thumbnail_content.get_id())
+            # remove from any caching
+            del_cached_content(thumbnail_content.location)
+        except:
+            pass  # OK if this is left dangling
+
+    # delete the original
+    contentstore().delete(content.get_id())
+    # remove from cache
+    del_cached_content(content.location)
+
+    return HttpResponse()
+
+
 @ensure_csrf_cookie
 @login_required
 def import_course(request, org, course, name):
@@ -227,11 +290,9 @@ def generate_export_course(request, org, course, name):
     root_dir = path(mkdtemp())

     # export out to a tempdir
-    logging.debug('root = {0}'.format(root_dir))
     export_to_xml(modulestore('direct'), contentstore(), loc, root_dir, name, modulestore())

-    #filename = root_dir / name + '.tar.gz'
     logging.debug('tar file being generated at {0}'.format(export_file.name))
     tar_file = tarfile.open(name=export_file.name, mode='w:gz')
diff --git a/cms/djangoapps/contentstore/views/course.py b/cms/djangoapps/contentstore/views/course.py
index 07f6b9669c..8762eb3a2a 100644
--- a/cms/djangoapps/contentstore/views/course.py
+++ b/cms/djangoapps/contentstore/views/course.py
@@ -2,7 +2,6 @@
 Views related to operations on course objects
 """
 import json
-import time

 from django.contrib.auth.decorators import login_required
 from django_future.csrf import ensure_csrf_cookie
@@ -32,6 +31,8 @@ from .component import OPEN_ENDED_COMPONENT_TYPES, \
     NOTE_COMPONENT_TYPES, ADVANCED_COMPONENT_POLICY_KEY
 from django_comment_common.utils import seed_permissions_roles
+import datetime
+from django.utils.timezone import UTC

 # TODO: should explicitly enumerate exports with __all__
@@ -130,7 +131,7 @@ def create_new_course(request):
         new_course.display_name = display_name

     # set a default start date to now
-    new_course.start = time.gmtime()
+    new_course.start = datetime.datetime.now(UTC())

     initialize_course_tabs(new_course)

@@ -357,52 +358,55 @@ def course_advanced_updates(request, org, course, name):
     # Whether or not to filter the tabs key out of the settings metadata
     filter_tabs = True

-    #Check to see if the user instantiated any advanced components. 
This is a hack - #that does the following : - # 1) adds/removes the open ended panel tab to a course automatically if the user + # Check to see if the user instantiated any advanced components. This is a hack + # that does the following : + # 1) adds/removes the open ended panel tab to a course automatically if the user # has indicated that they want to edit the combinedopendended or peergrading module # 2) adds/removes the notes panel tab to a course automatically if the user has # indicated that they want the notes module enabled in their course # TODO refactor the above into distinct advanced policy settings if ADVANCED_COMPONENT_POLICY_KEY in request_body: - #Get the course so that we can scrape current tabs + # Get the course so that we can scrape current tabs course_module = modulestore().get_item(location) - #Maps tab types to components + # Maps tab types to components tab_component_map = { - 'open_ended': OPEN_ENDED_COMPONENT_TYPES, + 'open_ended': OPEN_ENDED_COMPONENT_TYPES, 'notes': NOTE_COMPONENT_TYPES, } - #Check to see if the user instantiated any notes or open ended components + # Check to see if the user instantiated any notes or open ended components for tab_type in tab_component_map.keys(): component_types = tab_component_map.get(tab_type) found_ac_type = False for ac_type in component_types: if ac_type in request_body[ADVANCED_COMPONENT_POLICY_KEY]: - #Add tab to the course if needed + # Add tab to the course if needed changed, new_tabs = add_extra_panel_tab(tab_type, course_module) - #If a tab has been added to the course, then send the metadata along to CourseMetadata.update_from_json + # If a tab has been added to the course, then send the metadata along to CourseMetadata.update_from_json if changed: course_module.tabs = new_tabs request_body.update({'tabs': new_tabs}) - #Indicate that tabs should not be filtered out of the metadata + # Indicate that tabs should not be filtered out of the metadata filter_tabs = False - #Set this flag to avoid 
the tab removal code below. + # Set this flag to avoid the tab removal code below. found_ac_type = True break - #If we did not find a module type in the advanced settings, + # If we did not find a module type in the advanced settings, # we may need to remove the tab from the course. if not found_ac_type: - #Remove tab from the course if needed + # Remove tab from the course if needed changed, new_tabs = remove_extra_panel_tab(tab_type, course_module) if changed: course_module.tabs = new_tabs request_body.update({'tabs': new_tabs}) - #Indicate that tabs should *not* be filtered out of the metadata + # Indicate that tabs should *not* be filtered out of the metadata filter_tabs = False - - response_json = json.dumps(CourseMetadata.update_from_json(location, + try: + response_json = json.dumps(CourseMetadata.update_from_json(location, request_body, filter_tabs=filter_tabs)) + except (TypeError, ValueError), e: + return HttpResponseBadRequest("Incorrect setting format. " + str(e), content_type="text/plain") + return HttpResponse(response_json, mimetype="application/json") diff --git a/cms/djangoapps/models/settings/course_details.py b/cms/djangoapps/models/settings/course_details.py index 0dbb47b31b..07eb4bc309 100644 --- a/cms/djangoapps/models/settings/course_details.py +++ b/cms/djangoapps/models/settings/course_details.py @@ -3,26 +3,26 @@ from xmodule.modulestore.exceptions import ItemNotFoundError from xmodule.modulestore.inheritance import own_metadata import json from json.encoder import JSONEncoder -import time from contentstore.utils import get_modulestore from models.settings import course_grading from contentstore.utils import update_item from xmodule.fields import Date import re import logging +import datetime class CourseDetails(object): def __init__(self, location): - self.course_location = location # a Location obj + self.course_location = location # a Location obj self.start_date = None # 'start' - self.end_date = None # 'end' + self.end_date = None # 
'end' self.enrollment_start = None self.enrollment_end = None - self.syllabus = None # a pdf file asset - self.overview = "" # html to render as the overview - self.intro_video = None # a video pointer - self.effort = None # int hours/week + self.syllabus = None # a pdf file asset + self.overview = "" # html to render as the overview + self.intro_video = None # a video pointer + self.effort = None # int hours/week @classmethod def fetch(cls, course_location): @@ -73,9 +73,9 @@ class CourseDetails(object): """ Decode the json into CourseDetails and save any changed attrs to the db """ - ## TODO make it an error for this to be undefined & for it to not be retrievable from modulestore + # TODO make it an error for this to be undefined & for it to not be retrievable from modulestore course_location = jsondict['course_location'] - ## Will probably want to cache the inflight courses because every blur generates an update + # Will probably want to cache the inflight courses because every blur generates an update descriptor = get_modulestore(course_location).get_item(course_location) dirty = False @@ -181,7 +181,7 @@ class CourseSettingsEncoder(json.JSONEncoder): return obj.__dict__ elif isinstance(obj, Location): return obj.dict() - elif isinstance(obj, time.struct_time): + elif isinstance(obj, datetime.datetime): return Date().to_json(obj) else: return JSONEncoder.default(self, obj) diff --git a/cms/envs/acceptance.py b/cms/envs/acceptance.py index 36616ab257..6293219f43 100644 --- a/cms/envs/acceptance.py +++ b/cms/envs/acceptance.py @@ -23,7 +23,7 @@ MODULESTORE_OPTIONS = { 'db': 'test_xmodule', 'collection': 'acceptance_modulestore', 'fs_root': TEST_ROOT / "data", - 'render_template': 'mitxmako.shortcuts.render_to_string', + 'render_template': 'mitxmako.shortcuts.render_to_string' } MODULESTORE = { diff --git a/cms/envs/aws.py b/cms/envs/aws.py index 35b15fe6ba..c6a383211f 100644 --- a/cms/envs/aws.py +++ b/cms/envs/aws.py @@ -112,9 +112,6 @@ TIME_ZONE = 
ENV_TOKENS.get('TIME_ZONE', TIME_ZONE) for feature, value in ENV_TOKENS.get('MITX_FEATURES', {}).items(): MITX_FEATURES[feature] = value -# load segment.io key, provide a dummy if it does not exist -SEGMENT_IO_KEY = ENV_TOKENS.get('SEGMENT_IO_KEY', '***REMOVED***') - LOGGING = get_logger_config(LOG_DIR, logging_env=ENV_TOKENS['LOGGING_ENV'], syslog_addr=(ENV_TOKENS['SYSLOG_SERVER'], 514), @@ -126,6 +123,13 @@ LOGGING = get_logger_config(LOG_DIR, with open(ENV_ROOT / CONFIG_PREFIX + "auth.json") as auth_file: AUTH_TOKENS = json.load(auth_file) +# If Segment.io key specified, load it and turn on Segment.io if the feature flag is set +# Note that this is the Studio key. There is a separate key for the LMS. +SEGMENT_IO_KEY = AUTH_TOKENS.get('SEGMENT_IO_KEY') +if SEGMENT_IO_KEY: + MITX_FEATURES['SEGMENT_IO'] = ENV_TOKENS.get('SEGMENT_IO', False) + + AWS_ACCESS_KEY_ID = AUTH_TOKENS["AWS_ACCESS_KEY_ID"] AWS_SECRET_ACCESS_KEY = AUTH_TOKENS["AWS_SECRET_ACCESS_KEY"] DATABASES = AUTH_TOKENS['DATABASES'] diff --git a/cms/envs/common.py b/cms/envs/common.py index 22e69fa08a..8551a56c41 100644 --- a/cms/envs/common.py +++ b/cms/envs/common.py @@ -25,19 +25,30 @@ Longer TODO: import sys import lms.envs.common +from lms.envs.common import USE_TZ from path import path ############################ FEATURE CONFIGURATION ############################# MITX_FEATURES = { 'USE_DJANGO_PIPELINE': True, + 'GITHUB_PUSH': False, + 'ENABLE_DISCUSSION_SERVICE': False, + 'AUTH_USE_MIT_CERTIFICATES': False, - 'STUB_VIDEO_FOR_TESTING': False, # do not display video when running automated acceptance tests - 'STAFF_EMAIL': '', # email address for staff (eg to request course creation) + + # do not display video when running automated acceptance tests + 'STUB_VIDEO_FOR_TESTING': False, + + # email address for staff (eg to request course creation) + 'STAFF_EMAIL': '', + 'STUDIO_NPS_SURVEY': True, - 'SEGMENT_IO': True, + + # Segment.io - must explicitly turn it on for production + 'SEGMENT_IO': False, # 
Enable URL that shows information about the status of various services 'ENABLE_SERVICE_STATUS': False, @@ -183,7 +194,7 @@ STATICFILES_DIRS = [ # Locale/Internationalization TIME_ZONE = 'America/New_York' # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name -LANGUAGE_CODE = 'en' # http://www.i18nguy.com/unicode/language-identifiers.html +LANGUAGE_CODE = 'en' # http://www.i18nguy.com/unicode/language-identifiers.html USE_I18N = True USE_L10N = True @@ -227,7 +238,8 @@ PIPELINE_JS = { ) + ['js/hesitate.js', 'js/base.js', 'js/models/feedback.js', 'js/views/feedback.js', 'js/models/section.js', 'js/views/section.js', - 'js/models/metadata_model.js', 'js/views/metadata_editor_view.js'], + 'js/models/metadata_model.js', 'js/views/metadata_editor_view.js', + 'js/views/assets.js'], 'output_filename': 'js/cms-application.js', 'test_order': 0 }, diff --git a/cms/envs/dev.py b/cms/envs/dev.py index eea236f0e2..07630bdf31 100644 --- a/cms/envs/dev.py +++ b/cms/envs/dev.py @@ -22,7 +22,7 @@ modulestore_options = { 'db': 'xmodule', 'collection': 'modulestore', 'fs_root': GITHUB_REPO_ROOT, - 'render_template': 'mitxmako.shortcuts.render_to_string', + 'render_template': 'mitxmako.shortcuts.render_to_string' } MODULESTORE = { @@ -43,10 +43,15 @@ CONTENTSTORE = { 'OPTIONS': { 'host': 'localhost', 'db': 'xcontent', + }, + # allow for additional options that can be keyed on a name, e.g. 
'trashcan' + 'ADDITIONAL_OPTIONS': { + 'trashcan': { + 'bucket': 'trash_fs' + } } } - DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', @@ -64,7 +69,7 @@ REPOS = { }, 'content-mit-6002x': { 'branch': 'master', - #'origin': 'git@github.com:MITx/6002x-fall-2012.git', + # 'origin': 'git@github.com:MITx/6002x-fall-2012.git', 'origin': 'git@github.com:MITx/content-mit-6002x.git', }, '6.00x': { @@ -163,8 +168,14 @@ MITX_FEATURES['STUDIO_NPS_SURVEY'] = False # Enable URL that shows information about the status of variuous services MITX_FEATURES['ENABLE_SERVICE_STATUS'] = True -# segment-io key for dev -SEGMENT_IO_KEY = 'mty8edrrsg' +############################# SEGMENT-IO ################################## + +# If there's an environment variable set, grab it and turn on Segment.io +# Note that this is the Studio key. There is a separate key for the LMS. +import os +SEGMENT_IO_KEY = os.environ.get('SEGMENT_IO_KEY') +if SEGMENT_IO_KEY: + MITX_FEATURES['SEGMENT_IO'] = True ##################################################################### diff --git a/cms/envs/test.py b/cms/envs/test.py index 8a3f9ba158..954a553e10 100644 --- a/cms/envs/test.py +++ b/cms/envs/test.py @@ -48,7 +48,7 @@ MODULESTORE_OPTIONS = { 'db': 'test_xmodule', 'collection': 'test_modulestore', 'fs_root': TEST_ROOT / "data", - 'render_template': 'mitxmako.shortcuts.render_to_string', + 'render_template': 'mitxmako.shortcuts.render_to_string' } MODULESTORE = { @@ -70,7 +70,13 @@ CONTENTSTORE = { 'ENGINE': 'xmodule.contentstore.mongo.MongoContentStore', 'OPTIONS': { 'host': 'localhost', - 'db': 'xcontent', + 'db': 'test_xmodule', + }, + # allow for additional options that can be keyed on a name, e.g. 
'trashcan' + 'ADDITIONAL_OPTIONS': { + 'trashcan': { + 'bucket': 'trash_fs' + } } } @@ -121,7 +127,7 @@ CELERY_RESULT_BACKEND = 'cache' BROKER_TRANSPORT = 'memory' ################### Make tests faster -#http://slacy.com/blog/2012/04/make-your-tests-faster-in-django-1-4/ +# http://slacy.com/blog/2012/04/make-your-tests-faster-in-django-1-4/ PASSWORD_HASHERS = ( 'django.contrib.auth.hashers.SHA1PasswordHasher', 'django.contrib.auth.hashers.MD5PasswordHasher', diff --git a/cms/pydev_manage.py b/cms/pydev_manage.py new file mode 100644 index 0000000000..22c38d89eb --- /dev/null +++ b/cms/pydev_manage.py @@ -0,0 +1,11 @@ +''' +Used for pydev eclipse. Should be innocuous for everyone else. +Created on May 8, 2013 + +@author: dmitchell +''' +#!/home//mitx_all/python/bin/python +from django.core import management + +if __name__ == '__main__': + management.execute_from_command_line() diff --git a/cms/static/coffee/src/views/module_edit.coffee b/cms/static/coffee/src/views/module_edit.coffee index d0a76a6c15..5154591d6f 100644 --- a/cms/static/coffee/src/views/module_edit.coffee +++ b/cms/static/coffee/src/views/module_edit.coffee @@ -44,8 +44,17 @@ class CMS.Views.ModuleEdit extends Backbone.View [@metadataEditor.getDisplayName()]) @$el.find('.component-name').html(title) + customMetadata: -> + # Hack to support metadata fields that aren't part of the metadata editor (ie, LaTeX high level source). + # Walk through the set of elements which have the 'data-metadata_name' attribute and + # build up an object to pass back to the server on the subsequent POST. + # Note that these values will always be sent back on POST, even if they did not actually change. 
+ _metadata = {} + _metadata[$(el).data("metadata-name")] = el.value for el in $('[data-metadata-name]', @$component_editor()) + return _metadata + changedMetadata: -> - return @metadataEditor.getModifiedMetadataValues() + return _.extend(@metadataEditor.getModifiedMetadataValues(), @customMetadata()) cloneTemplate: (parent, template) -> $.post("/clone_item", { diff --git a/cms/static/js/base.js b/cms/static/js/base.js index fe60d80239..92a16b8417 100644 --- a/cms/static/js/base.js +++ b/cms/static/js/base.js @@ -32,8 +32,6 @@ $(document).ready(function() { $modal.bind('click', hideModal); $modalCover.bind('click', hideModal); - $('.uploads .upload-button').bind('click', showUploadModal); - $('.upload-modal .close-button').bind('click', hideModal); $body.on('click', '.embeddable-xml-input', function() { $(this).select(); @@ -145,8 +143,6 @@ $(document).ready(function() { $('.edit-section-start-cancel').bind('click', cancelSetSectionScheduleDate); $('.edit-section-start-save').bind('click', saveSetSectionScheduleDate); - $('.upload-modal .choose-file-button').bind('click', showFileSelectionMenu); - $body.on('click', '.section-published-date .edit-button', editSectionPublishDate); $body.on('click', '.section-published-date .schedule-button', editSectionPublishDate); $body.on('click', '.edit-subsection-publish-settings .save-button', saveSetSectionScheduleDate); @@ -398,69 +394,6 @@ function _deleteItem($el) { }); } -function showUploadModal(e) { - e.preventDefault(); - $modal = $('.upload-modal').show(); - $('.file-input').bind('change', startUpload); - $modalCover.show(); -} - -function showFileSelectionMenu(e) { - e.preventDefault(); - $('.file-input').click(); -} - -function startUpload(e) { - $('.upload-modal h1').html(gettext('Uploading…')); - $('.upload-modal .file-name').html($('.file-input').val().replace('C:\\fakepath\\', '')); - $('.upload-modal .file-chooser').ajaxSubmit({ - beforeSend: resetUploadBar, - uploadProgress: showUploadFeedback, - complete: 
displayFinishedUpload - }); - $('.upload-modal .choose-file-button').hide(); - $('.upload-modal .progress-bar').removeClass('loaded').show(); -} - -function resetUploadBar() { - var percentVal = '0%'; - $('.upload-modal .progress-fill').width(percentVal); - $('.upload-modal .progress-fill').html(percentVal); -} - -function showUploadFeedback(event, position, total, percentComplete) { - var percentVal = percentComplete + '%'; - $('.upload-modal .progress-fill').width(percentVal); - $('.upload-modal .progress-fill').html(percentVal); -} - -function displayFinishedUpload(xhr) { - if (xhr.status = 200) { - markAsLoaded(); - } - - var resp = JSON.parse(xhr.responseText); - $('.upload-modal .embeddable-xml-input').val(xhr.getResponseHeader('asset_url')); - $('.upload-modal .embeddable').show(); - $('.upload-modal .file-name').hide(); - $('.upload-modal .progress-fill').html(resp.msg); - $('.upload-modal .choose-file-button').html(gettext('Load Another File')).show(); - $('.upload-modal .progress-fill').width('100%'); - - // see if this id already exists, if so, then user must have updated an existing piece of content - $("tr[data-id='" + resp.url + "']").remove(); - - var template = $('#new-asset-element').html(); - var html = Mustache.to_html(template, resp); - $('table > tbody').prepend(html); - - analytics.track('Uploaded a File', { - 'course': course_location_analytics, - 'asset_url': resp.url - }); - -} - function markAsLoaded() { $('.upload-modal .copy-button').css('display', 'inline-block'); $('.upload-modal .progress-bar').addClass('loaded'); diff --git a/cms/static/js/models/feedback.js b/cms/static/js/models/feedback.js index 1f1ee57000..d57cffa779 100644 --- a/cms/static/js/models/feedback.js +++ b/cms/static/js/models/feedback.js @@ -42,6 +42,12 @@ CMS.Models.ErrorMessage = CMS.Models.SystemFeedback.extend({ }) }); +CMS.Models.ConfirmAssetDeleteMessage = CMS.Models.SystemFeedback.extend({ + defaults: $.extend({}, CMS.Models.SystemFeedback.prototype.defaults, 
{ + "intent": "warning" + }) +}); + CMS.Models.ConfirmationMessage = CMS.Models.SystemFeedback.extend({ defaults: $.extend({}, CMS.Models.SystemFeedback.prototype.defaults, { "intent": "confirmation" diff --git a/cms/static/js/views/assets.js b/cms/static/js/views/assets.js new file mode 100644 index 0000000000..9eb521dcb6 --- /dev/null +++ b/cms/static/js/views/assets.js @@ -0,0 +1,128 @@ +$(document).ready(function() { + $('.uploads .upload-button').bind('click', showUploadModal); + $('.upload-modal .close-button').bind('click', hideModal); + $('.upload-modal .choose-file-button').bind('click', showFileSelectionMenu); + $('.remove-asset-button').bind('click', removeAsset); +}); + +function removeAsset(e){ + e.preventDefault(); + + var that = this; + var msg = new CMS.Models.ConfirmAssetDeleteMessage({ + title: gettext("Delete File Confirmation"), + message: gettext("Are you sure you wish to delete this item. It cannot be reversed!\n\nAlso any content that links/refers to this item will no longer work (e.g. broken images and/or links)"), + actions: { + primary: { + text: gettext("OK"), + click: function(view) { + // call the back-end to actually remove the asset + $.post(view.model.get('remove_asset_url'), + { 'location': view.model.get('asset_location') }, + function() { + // show the post-commit confirmation + $(".wrapper-alert-confirmation").addClass("is-shown").attr('aria-hidden','false'); + view.model.get('row_to_remove').remove(); + analytics.track('Deleted Asset', { + 'course': course_location_analytics, + 'id': view.model.get('asset_location') + }); + } + ); + view.hide(); + } + }, + secondary: [{ + text: gettext("Cancel"), + click: function(view) { + view.hide(); + } + }] + }, + remove_asset_url: $('.asset-library').data('remove-asset-callback-url'), + asset_location: $(this).closest('tr').data('id'), + row_to_remove: $(this).closest('tr') + }); + + // workaround for now. 
We can't spawn multiple instances of the Prompt View + // so for now, a bit of hackery to just make sure we have a single instance + // note: confirm_delete_prompt is in asset_index.html + if (confirm_delete_prompt === null) + confirm_delete_prompt = new CMS.Views.Prompt({model: msg}); + else + { + confirm_delete_prompt.model = msg; + confirm_delete_prompt.show(); + } + + return; +} + +function showUploadModal(e) { + e.preventDefault(); + $modal = $('.upload-modal').show(); + $('.file-input').bind('change', startUpload); + $modalCover.show(); +} + +function showFileSelectionMenu(e) { + e.preventDefault(); + $('.file-input').click(); +} + +function startUpload(e) { + var files = $('.file-input').get(0).files; + if (files.length === 0) + return; + + $('.upload-modal h1').html(gettext('Uploading…')); + $('.upload-modal .file-name').html(files[0].name); + $('.upload-modal .file-chooser').ajaxSubmit({ + beforeSend: resetUploadBar, + uploadProgress: showUploadFeedback, + complete: displayFinishedUpload + }); + $('.upload-modal .choose-file-button').hide(); + $('.upload-modal .progress-bar').removeClass('loaded').show(); +} + +function resetUploadBar() { + var percentVal = '0%'; + $('.upload-modal .progress-fill').width(percentVal); + $('.upload-modal .progress-fill').html(percentVal); +} + +function showUploadFeedback(event, position, total, percentComplete) { + var percentVal = percentComplete + '%'; + $('.upload-modal .progress-fill').width(percentVal); + $('.upload-modal .progress-fill').html(percentVal); +} + +function displayFinishedUpload(xhr) { + if (xhr.status == 200) { + markAsLoaded(); + } + + var resp = JSON.parse(xhr.responseText); + $('.upload-modal .embeddable-xml-input').val(xhr.getResponseHeader('asset_url')); + $('.upload-modal .embeddable').show(); + $('.upload-modal .file-name').hide(); + $('.upload-modal .progress-fill').html(resp.msg); + $('.upload-modal .choose-file-button').html(gettext('Load Another File')).show(); + $('.upload-modal 
.progress-fill').width('100%'); + + // see if this id already exists, if so, then user must have updated an existing piece of content + $("tr[data-id='" + resp.url + "']").remove(); + + var template = $('#new-asset-element').html(); + var html = Mustache.to_html(template, resp); + $('table > tbody').prepend(html); + + // re-bind the listeners to delete it + $('.remove-asset-button').bind('click', removeAsset); + + analytics.track('Uploaded a File', { + 'course': course_location_analytics, + 'asset_url': resp.url + }); +} \ No newline at end of file diff --git a/cms/static/sass/views/_assets.scss b/cms/static/sass/views/_assets.scss index d01dd988ef..d4cff42ee9 100644 --- a/cms/static/sass/views/_assets.scss +++ b/cms/static/sass/views/_assets.scss @@ -76,6 +76,10 @@ body.course.uploads { width: 250px; } + .delete-col { + width: 20px; + } + .embeddable-xml-input { @include box-shadow(none); width: 100%; diff --git a/cms/templates/asset_index.html b/cms/templates/asset_index.html index f03a9012f8..0006d29d38 100644 --- a/cms/templates/asset_index.html +++ b/cms/templates/asset_index.html @@ -1,5 +1,6 @@ <%inherit file="base.html" /> <%! from django.core.urlresolvers import reverse %> +<%! from django.utils.translation import ugettext as _ %> <%block name="bodyclass">is-signedin course uploads <%block name="title">Files & Uploads @@ -7,6 +8,11 @@ <%block name="jsextra"> + + <%block name="content"> @@ -30,6 +36,9 @@ + + + @@ -56,7 +65,7 @@
-
+
@@ -64,6 +73,7 @@ + @@ -86,6 +96,9 @@ + % endfor @@ -129,3 +142,21 @@ + +<%block name="view_alerts"> + +
+
+ + +
+

${_('Your file has been deleted.')}

+
+ + + + ${_('close alert')} + +
+
+ diff --git a/cms/templates/edit_subsection.html b/cms/templates/edit_subsection.html index 9bb9b3a506..cbce91ab44 100644 --- a/cms/templates/edit_subsection.html +++ b/cms/templates/edit_subsection.html @@ -1,7 +1,7 @@ <%inherit file="base.html" /> <%! import logging - from xmodule.util.date_utils import get_time_struct_display + from xmodule.util.date_utils import get_default_time_display %> <%! from django.core.urlresolvers import reverse %> @@ -36,11 +36,15 @@
- +
- +
% if subsection.lms.start != parent_item.lms.start and subsection.lms.start: @@ -48,7 +52,7 @@

The date above differs from the release date of ${parent_item.display_name_with_default}, which is unset. % else:

The date above differs from the release date of ${parent_item.display_name_with_default} – - ${get_time_struct_display(parent_item.lms.start, '%m/%d/%Y at %H:%M UTC')}. + ${get_default_time_display(parent_item.lms.start)}. % endif Sync to ${parent_item.display_name_with_default}.

% endif @@ -65,11 +69,15 @@
- +
- +
Remove due date
diff --git a/cms/templates/overview.html b/cms/templates/overview.html index d327c8b324..43d0afc263 100644 --- a/cms/templates/overview.html +++ b/cms/templates/overview.html @@ -1,7 +1,7 @@ <%inherit file="base.html" /> <%! import logging - from xmodule.util.date_utils import get_time_struct_display + from xmodule.util import date_utils %> <%! from django.core.urlresolvers import reverse %> <%block name="title">Course Outline @@ -154,14 +154,19 @@

diff --git a/cms/urls.py b/cms/urls.py index e7444de4e9..a9a7f0a68a 100644 --- a/cms/urls.py +++ b/cms/urls.py @@ -35,6 +35,8 @@ urlpatterns = ('', # nopep8 'contentstore.views.preview_dispatch', name='preview_dispatch'), url(r'^(?P[^/]+)/(?P[^/]+)/course/(?P[^/]+)/upload_asset$', 'contentstore.views.upload_asset', name='upload_asset'), + + url(r'^manage_users/(?P.*?)$', 'contentstore.views.manage_users', name='manage_users'), url(r'^add_user/(?P.*?)$', 'contentstore.views.add_user', name='add_user'), @@ -71,8 +73,11 @@ urlpatterns = ('', # nopep8 'contentstore.views.edit_static', name='edit_static'), url(r'^edit_tabs/(?P[^/]+)/(?P[^/]+)/course/(?P[^/]+)$', 'contentstore.views.edit_tabs', name='edit_tabs'), + url(r'^(?P[^/]+)/(?P[^/]+)/assets/(?P[^/]+)$', 'contentstore.views.asset_index', name='asset_index'), + url(r'^(?P[^/]+)/(?P[^/]+)/assets/(?P[^/]+)/remove$', + 'contentstore.views.assets.remove_asset', name='remove_asset'), # this is a generic method to return the data/metadata associated with a xmodule url(r'^module_info/(?P.*)$', diff --git a/cms/xmodule_namespace.py b/cms/xmodule_namespace.py index 4857fe68ca..eef4b41f37 100644 --- a/cms/xmodule_namespace.py +++ b/cms/xmodule_namespace.py @@ -5,7 +5,6 @@ Namespace defining common fields used by Studio for all blocks import datetime from xblock.core import Namespace, Scope, ModelType, String -from xmodule.fields import StringyBoolean class DateTuple(ModelType): @@ -28,4 +27,3 @@ class CmsNamespace(Namespace): """ published_date = DateTuple(help="Date when the module was published", scope=Scope.settings) published_by = String(help="Id of the user who published this module", scope=Scope.settings) - diff --git a/common/djangoapps/contentserver/middleware.py b/common/djangoapps/contentserver/middleware.py index 8e9e70046d..7deb0901aa 100644 --- a/common/djangoapps/contentserver/middleware.py +++ b/common/djangoapps/contentserver/middleware.py @@ -1,7 +1,4 @@ -import logging -import time - -from django.http 
import HttpResponse, Http404, HttpResponseNotModified +from django.http import HttpResponse, HttpResponseNotModified from xmodule.contentstore.django import contentstore from xmodule.contentstore.content import StaticContent, XASSET_LOCATION_TAG @@ -20,7 +17,7 @@ class StaticContentServer(object): # return a 'Bad Request' to browser as we have a malformed Location response = HttpResponse() response.status_code = 400 - return response + return response # first look in our cache so we don't have to round-trip to the DB content = get_cached_content(loc) diff --git a/common/djangoapps/mitxmako/tests.py b/common/djangoapps/mitxmako/tests.py index f419daa681..e7e56a9472 100644 --- a/common/djangoapps/mitxmako/tests.py +++ b/common/djangoapps/mitxmako/tests.py @@ -1,18 +1,15 @@ from django.test import TestCase from django.test.utils import override_settings from django.core.urlresolvers import reverse -from django.conf import settings from mitxmako.shortcuts import marketing_link from mock import patch -from nose.plugins.skip import SkipTest +from util.testing import UrlResetMixin -class ShortcutsTests(TestCase): + +class ShortcutsTests(UrlResetMixin, TestCase): """ Test the mitxmako shortcuts file """ - # TODO: fix this test. 
It is causing intermittent test failures on - # subsequent tests due to the way urls are loaded - raise SkipTest() @override_settings(MKTG_URLS={'ROOT': 'dummy-root', 'ABOUT': '/about-us'}) @override_settings(MKTG_URL_LINK_MAP={'ABOUT': 'login'}) def test_marketing_link(self): diff --git a/common/djangoapps/student/management/commands/assigngroups.py b/common/djangoapps/student/management/commands/assigngroups.py index fb7bfc85cd..5269c8690e 100644 --- a/common/djangoapps/student/management/commands/assigngroups.py +++ b/common/djangoapps/student/management/commands/assigngroups.py @@ -14,6 +14,7 @@ import sys import datetime import json +from pytz import UTC middleware.MakoMiddleware() @@ -32,7 +33,7 @@ def group_from_value(groups, v): class Command(BaseCommand): - help = \ + help = \ ''' Assign users to test groups. Takes a list of groups: a:0.3,b:0.4,c:0.3 file.txt "Testing something" @@ -75,7 +76,7 @@ Will log what happened to file.txt. utg = UserTestGroup() utg.name = group utg.description = json.dumps({"description": args[2]}, - {"time": datetime.datetime.utcnow().isoformat()}) + {"time": datetime.datetime.now(UTC).isoformat()}) group_objects[group] = utg group_objects[group].save() diff --git a/common/djangoapps/student/management/commands/pearson_export_cdd.py b/common/djangoapps/student/management/commands/pearson_export_cdd.py index bad98b9d25..efb4a55387 100644 --- a/common/djangoapps/student/management/commands/pearson_export_cdd.py +++ b/common/djangoapps/student/management/commands/pearson_export_cdd.py @@ -8,6 +8,7 @@ from django.conf import settings from django.core.management.base import BaseCommand, CommandError from student.models import TestCenterUser +from pytz import UTC class Command(BaseCommand): @@ -58,7 +59,7 @@ class Command(BaseCommand): def handle(self, **options): # update time should use UTC in order to be comparable to the user_updated_at # field - uploaded_at = datetime.utcnow() + uploaded_at = datetime.now(UTC) # if specified 
destination is an existing directory, then # create a filename for it automatically. If it doesn't exist, @@ -100,7 +101,7 @@ class Command(BaseCommand): extrasaction='ignore') writer.writeheader() for tcu in TestCenterUser.objects.order_by('id'): - if tcu.needs_uploading: # or dump_all + if tcu.needs_uploading: # or dump_all record = dict((csv_field, ensure_encoding(getattr(tcu, model_field))) for csv_field, model_field in Command.CSV_TO_MODEL_FIELDS.items()) diff --git a/common/djangoapps/student/management/commands/pearson_export_ead.py b/common/djangoapps/student/management/commands/pearson_export_ead.py index 03dbce0024..ec10ab1599 100644 --- a/common/djangoapps/student/management/commands/pearson_export_ead.py +++ b/common/djangoapps/student/management/commands/pearson_export_ead.py @@ -8,6 +8,7 @@ from django.conf import settings from django.core.management.base import BaseCommand, CommandError from student.models import TestCenterRegistration, ACCOMMODATION_REJECTED_CODE +from pytz import UTC class Command(BaseCommand): @@ -51,7 +52,7 @@ class Command(BaseCommand): def handle(self, **options): # update time should use UTC in order to be comparable to the user_updated_at # field - uploaded_at = datetime.utcnow() + uploaded_at = datetime.now(UTC) # if specified destination is an existing directory, then # create a filename for it automatically. 
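The repeated `datetime.utcnow()` to `datetime.now(UTC)` substitutions in these management commands implement the changelog rule that all datetimes be timezone-aware. A quick sketch of the difference (using the stdlib `timezone.utc` for self-containment; the patch itself imports `UTC` from `pytz`):

```python
from datetime import datetime, timezone

naive = datetime.utcnow()            # tzinfo is None: a "naive" datetime
aware = datetime.now(timezone.utc)   # tzinfo is set: an "aware" datetime

assert naive.tzinfo is None
assert aware.tzinfo is not None

# Mixing the two in a comparison is an error, which is why the codebase
# standardizes on aware datetimes everywhere:
mixing_fails = False
try:
    naive < aware
except TypeError:
    mixing_fails = True
```

Standardizing on aware values also makes fields like `user_updated_at` directly comparable to the upload timestamps these commands compute.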
If it doesn't exist, diff --git a/common/djangoapps/student/management/commands/pearson_import_conf_zip.py b/common/djangoapps/student/management/commands/pearson_import_conf_zip.py index d0b2938df0..2339383719 100644 --- a/common/djangoapps/student/management/commands/pearson_import_conf_zip.py +++ b/common/djangoapps/student/management/commands/pearson_import_conf_zip.py @@ -13,6 +13,7 @@ from django.core.management.base import BaseCommand, CommandError from django.conf import settings from student.models import TestCenterUser, TestCenterRegistration +from pytz import UTC class Command(BaseCommand): @@ -68,7 +69,7 @@ class Command(BaseCommand): Command.datadog_error("Found authorization record for user {}".format(registration.testcenter_user.user.username), eacfile.name) # now update the record: registration.upload_status = row['Status'] - registration.upload_error_message = row['Message'] + registration.upload_error_message = row['Message'] try: registration.processed_at = strftime('%Y-%m-%d %H:%M:%S', strptime(row['Date'], '%Y/%m/%d %H:%M:%S')) except ValueError as ve: @@ -80,7 +81,7 @@ class Command(BaseCommand): except ValueError as ve: Command.datadog_error("Bad AuthorizationID value found for {}: message {}".format(client_authorization_id, ve), eacfile.name) - registration.confirmed_at = datetime.utcnow() + registration.confirmed_at = datetime.now(UTC) registration.save() except TestCenterRegistration.DoesNotExist: Command.datadog_error("Failed to find record for client_auth_id {}".format(client_authorization_id), eacfile.name) diff --git a/common/djangoapps/student/management/commands/pearson_make_tc_registration.py b/common/djangoapps/student/management/commands/pearson_make_tc_registration.py index b10cf143a0..50e56bb4be 100644 --- a/common/djangoapps/student/management/commands/pearson_make_tc_registration.py +++ b/common/djangoapps/student/management/commands/pearson_make_tc_registration.py @@ -1,5 +1,4 @@ from optparse import make_option -from time 
import strftime from django.contrib.auth.models import User from django.core.management.base import BaseCommand, CommandError @@ -128,8 +127,8 @@ class Command(BaseCommand): exam = CourseDescriptor.TestCenterExam(course_id, exam_name, exam_info) # update option values for date_first and date_last to use YYYY-MM-DD format # instead of YYYY-MM-DDTHH:MM - our_options['eligibility_appointment_date_first'] = strftime("%Y-%m-%d", exam.first_eligible_appointment_date) - our_options['eligibility_appointment_date_last'] = strftime("%Y-%m-%d", exam.last_eligible_appointment_date) + our_options['eligibility_appointment_date_first'] = exam.first_eligible_appointment_date.strftime("%Y-%m-%d") + our_options['eligibility_appointment_date_last'] = exam.last_eligible_appointment_date.strftime("%Y-%m-%d") if exam is None: raise CommandError("Exam for course_id {} does not exist".format(course_id)) diff --git a/common/djangoapps/student/models.py b/common/djangoapps/student/models.py index ab68b05f4b..af93c34317 100644 --- a/common/djangoapps/student/models.py +++ b/common/djangoapps/student/models.py @@ -16,7 +16,6 @@ import json import logging import uuid from random import randint -from time import strftime from django.conf import settings @@ -27,6 +26,7 @@ from django.dispatch import receiver from django.forms import ModelForm, forms import comment_client as cc +from pytz import UTC log = logging.getLogger(__name__) @@ -54,7 +54,7 @@ class UserProfile(models.Model): class Meta: db_table = "auth_userprofile" - ## CRITICAL TODO/SECURITY + # CRITICAL TODO/SECURITY # Sanitize all fields. 
# This is not visible to other users, but could introduce holes later user = models.OneToOneField(User, unique=True, db_index=True, related_name='profile') @@ -254,7 +254,7 @@ class TestCenterUserForm(ModelForm): def update_and_save(self): new_user = self.save(commit=False) # create additional values here: - new_user.user_updated_at = datetime.utcnow() + new_user.user_updated_at = datetime.now(UTC) new_user.upload_status = '' new_user.save() log.info("Updated demographic information for user's test center exam registration: username \"{}\" ".format(new_user.user.username)) @@ -429,8 +429,8 @@ class TestCenterRegistration(models.Model): registration.course_id = exam.course_id registration.accommodation_request = accommodation_request.strip() registration.exam_series_code = exam.exam_series_code - registration.eligibility_appointment_date_first = strftime("%Y-%m-%d", exam.first_eligible_appointment_date) - registration.eligibility_appointment_date_last = strftime("%Y-%m-%d", exam.last_eligible_appointment_date) + registration.eligibility_appointment_date_first = exam.first_eligible_appointment_date.strftime("%Y-%m-%d") + registration.eligibility_appointment_date_last = exam.last_eligible_appointment_date.strftime("%Y-%m-%d") registration.client_authorization_id = cls._create_client_authorization_id() # accommodation_code remains blank for now, along with Pearson confirmation information return registration @@ -556,7 +556,7 @@ class TestCenterRegistrationForm(ModelForm): def update_and_save(self): registration = self.save(commit=False) # create additional values here: - registration.user_updated_at = datetime.utcnow() + registration.user_updated_at = datetime.now(UTC) registration.upload_status = '' registration.save() log.info("Updated registration information for user's test center exam registration: username \"{}\" course \"{}\", examcode \"{}\"".format(registration.testcenter_user.user.username, registration.course_id, registration.exam_series_code)) @@ -598,7 
+598,7 @@ def unique_id_for_user(user): return h.hexdigest() -## TODO: Should be renamed to generic UserGroup, and possibly +# TODO: Should be renamed to generic UserGroup, and possibly # Given an optional field for type of group class UserTestGroup(models.Model): users = models.ManyToManyField(User, db_index=True) @@ -626,7 +626,6 @@ class Registration(models.Model): def activate(self): self.user.is_active = True self.user.save() - #self.delete() class PendingNameChange(models.Model): @@ -648,7 +647,7 @@ class CourseEnrollment(models.Model): created = models.DateTimeField(auto_now_add=True, null=True, db_index=True) class Meta: - unique_together = (('user', 'course_id'), ) + unique_together = (('user', 'course_id'),) def __unicode__(self): return "[CourseEnrollment] %s: %s (%s)" % (self.user, self.course_id, self.created) @@ -667,12 +666,12 @@ class CourseEnrollmentAllowed(models.Model): created = models.DateTimeField(auto_now_add=True, null=True, db_index=True) class Meta: - unique_together = (('email', 'course_id'), ) + unique_together = (('email', 'course_id'),) def __unicode__(self): return "[CourseEnrollmentAllowed] %s: %s (%s)" % (self.email, self.course_id, self.created) -#cache_relation(User.profile) +# cache_relation(User.profile) #### Helper methods for use from python manage.py shell and other classes. 
diff --git a/common/djangoapps/student/tests/factories.py b/common/djangoapps/student/tests/factories.py index d73bb6f01d..49864fcbd4 100644 --- a/common/djangoapps/student/tests/factories.py +++ b/common/djangoapps/student/tests/factories.py @@ -5,6 +5,7 @@ from django.contrib.auth.models import Group from datetime import datetime from factory import DjangoModelFactory, SubFactory, PostGenerationMethodCall, post_generation, Sequence from uuid import uuid4 +from pytz import UTC # Factories don't have __init__ methods, and are self documenting # pylint: disable=W0232 @@ -46,8 +47,8 @@ class UserFactory(DjangoModelFactory): is_staff = False is_active = True is_superuser = False - last_login = datetime(2012, 1, 1) - date_joined = datetime(2011, 1, 1) + last_login = datetime(2012, 1, 1, tzinfo=UTC) + date_joined = datetime(2011, 1, 1, tzinfo=UTC) @post_generation def profile(obj, create, extracted, **kwargs): diff --git a/common/djangoapps/student/views.py b/common/djangoapps/student/views.py index 87e9f8c804..f129f1b4b1 100644 --- a/common/djangoapps/student/views.py +++ b/common/djangoapps/student/views.py @@ -49,6 +49,7 @@ from courseware.views import get_module_for_descriptor, jump_to from courseware.model_data import ModelDataCache from statsd import statsd +from pytz import UTC log = logging.getLogger("mitx.student") Article = namedtuple('Article', 'title url author image deck publication publish_date') @@ -77,7 +78,7 @@ def index(request, extra_context={}, user=None): ''' # The course selection work is done in courseware.courses. - domain = settings.MITX_FEATURES.get('FORCE_UNIVERSITY_DOMAIN') # normally False + domain = settings.MITX_FEATURES.get('FORCE_UNIVERSITY_DOMAIN') # normally False # do explicit check, because domain=None is valid if domain == False: domain = request.META.get('HTTP_HOST') @@ -630,7 +631,7 @@ def create_account(request, post_override=None): # Ok, looks like everything is legit. Create the account. 
ret = _do_create_account(post_vars) - if isinstance(ret, HttpResponse): # if there was an error then return that + if isinstance(ret, HttpResponse): # if there was an error then return that return ret (user, profile, registration) = ret @@ -668,7 +669,7 @@ def create_account(request, post_override=None): if DoExternalAuth: eamap.user = login_user - eamap.dtsignup = datetime.datetime.now() + eamap.dtsignup = datetime.datetime.now(UTC) eamap.save() log.debug('Updated ExternalAuthMap for %s to be %s' % (post_vars['username'], eamap)) diff --git a/common/djangoapps/terrain/ui_helpers.py b/common/djangoapps/terrain/ui_helpers.py index ecd43eb719..b1c5f30467 100644 --- a/common/djangoapps/terrain/ui_helpers.py +++ b/common/djangoapps/terrain/ui_helpers.py @@ -3,6 +3,7 @@ from lettuce import world import time +import platform from urllib import quote_plus from selenium.common.exceptions import WebDriverException, StaleElementReferenceException from selenium.webdriver.support import expected_conditions as EC @@ -57,20 +58,28 @@ def css_find(css, wait_time=5): @world.absorb -def css_click(css_selector): +def css_click(css_selector, index=0, attempts=5): """ Perform a click on a CSS selector, retrying if it initially fails + This function will return if the click worked (since it is try/excepting all errors) """ assert is_css_present(css_selector) - try: - world.browser.find_by_css(css_selector).click() - - except WebDriverException: - # Occassionally, MathJax or other JavaScript can cover up - # an element temporarily. - # If this happens, wait a second, then try again - world.wait(1) - world.browser.find_by_css(css_selector).click() + attempt = 0 + result = False + while attempt < attempts: + try: + world.css_find(css_selector)[index].click() + result = True + break + except WebDriverException: + # Occasionally, MathJax or other JavaScript can cover up + # an element temporarily. 
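The rewritten `css_click` in `terrain/ui_helpers.py` is at heart a bounded retry loop around a flaky action. A hedged sketch of the same pattern, with the Selenium calls replaced by a generic callable (the real helper also sleeps a second between `WebDriverException` attempts):

```python
def click_with_retry(action, attempts=5):
    """Return True as soon as `action` succeeds, False after `attempts` failures."""
    for _ in range(attempts):
        try:
            action()
            return True
        except Exception:
            # e.g. WebDriverException when MathJax briefly covers the element
            continue
    return False


calls = {'n': 0}

def flaky():
    # Fails twice, then succeeds -- like an element obscured momentarily.
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError("element obscured")

def always_fails():
    raise RuntimeError("never clickable")
```

Note that because the helper swallows all exceptions and reports success as a boolean, callers must check the return value rather than rely on an exception propagating.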
+ # If this happens, wait a second, then try again + world.wait(1) + attempt += 1 + except: + attempt += 1 + return result @world.absorb @@ -158,3 +167,8 @@ def click_tools(): tools_css = 'li.nav-course-tools' if world.browser.is_element_present_by_css(tools_css): world.css_click(tools_css) + + +@world.absorb +def is_mac(): + return platform.mac_ver()[0] is not '' diff --git a/common/djangoapps/tests.py b/common/djangoapps/tests.py new file mode 100644 index 0000000000..8e78ee7f37 --- /dev/null +++ b/common/djangoapps/tests.py @@ -0,0 +1,49 @@ +''' +Created on Jun 6, 2013 + +@author: dmitchell +''' +from xmodule.modulestore.tests.factories import CourseFactory, ItemFactory +from student.tests.factories import AdminFactory +from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase +import xmodule_modifiers +import datetime +from pytz import UTC +from xmodule.modulestore.tests import factories + +class TestXmoduleModfiers(ModuleStoreTestCase): + + # FIXME disabled b/c start date inheritance is not occuring and render_... 
in get_html is failing due + # to middleware.lookup['main'] not being defined + def _test_add_histogram(self): + instructor = AdminFactory.create() + self.client.login(username=instructor.username, password='test') + + course = CourseFactory.create(org='test', + number='313', display_name='histogram test') + section = ItemFactory.create( + parent_location=course.location, display_name='chapter hist', + template='i4x://edx/templates/chapter/Empty') + problem = ItemFactory.create( + parent_location=section.location, display_name='problem hist 1', + template='i4x://edx/templates/problem/Blank_Common_Problem') + problem.has_score = False # don't trip trying to retrieve db data + + late_problem = ItemFactory.create( + parent_location=section.location, display_name='problem hist 2', + template='i4x://edx/templates/problem/Blank_Common_Problem') + late_problem.lms.start = datetime.datetime.now(UTC) + datetime.timedelta(days=32) + late_problem.has_score = False + + + problem_module = factories.get_test_xmodule_for_descriptor(problem) + problem_module.get_html = xmodule_modifiers.add_histogram(lambda:'', problem_module, instructor) + + self.assertRegexpMatches( + problem_module.get_html(), r'.*Not yet.*') + + problem_module = factories.get_test_xmodule_for_descriptor(late_problem) + problem_module.get_html = xmodule_modifiers.add_histogram(lambda: '', problem_module, instructor) + + self.assertRegexpMatches( + problem_module.get_html(), r'.*Yes!.*') diff --git a/common/djangoapps/track/views.py b/common/djangoapps/track/views.py index ae3a1dcb3e..221bab5468 100644 --- a/common/djangoapps/track/views.py +++ b/common/djangoapps/track/views.py @@ -1,19 +1,18 @@ import json import logging -import os import pytz import datetime import dateutil.parser from django.contrib.auth.decorators import login_required from django.http import HttpResponse -from django.http import Http404 from django.shortcuts import redirect from django.conf import settings from mitxmako.shortcuts import 
render_to_response from django_future.csrf import ensure_csrf_cookie from track.models import TrackingLog +from pytz import UTC log = logging.getLogger("tracking") @@ -21,6 +20,7 @@ LOGFIELDS = ['username', 'ip', 'event_source', 'event_type', 'event', 'agent', ' def log_event(event): + """Write tracking event to log file, and optionally to TrackingLog model.""" event_str = json.dumps(event) log.info(event_str[:settings.TRACK_MAX_EVENT]) if settings.MITX_FEATURES.get('ENABLE_SQL_TRACKING_LOGS'): @@ -33,6 +33,11 @@ def log_event(event): def user_track(request): + """ + Log when GET call to "event" URL is made by a user. + + GET call should provide "event_type", "event", and "page" arguments. + """ try: # TODO: Do the same for many of the optional META parameters username = request.user.username except: @@ -49,7 +54,6 @@ def user_track(request): except: agent = '' - # TODO: Move a bunch of this into log_event event = { "username": username, "session": scookie, @@ -59,7 +63,7 @@ def user_track(request): "event": request.GET['event'], "agent": agent, "page": request.GET['page'], - "time": datetime.datetime.utcnow().isoformat(), + "time": datetime.datetime.now(UTC).isoformat(), "host": request.META['SERVER_NAME'], } log_event(event) @@ -67,6 +71,7 @@ def user_track(request): def server_track(request, event_type, event, page=None): + """Log events related to server requests.""" try: username = request.user.username except: @@ -85,18 +90,61 @@ def server_track(request, event_type, event, page=None): "event": event, "agent": agent, "page": page, - "time": datetime.datetime.utcnow().isoformat(), + "time": datetime.datetime.now(UTC).isoformat(), "host": request.META['SERVER_NAME'], } - if event_type.startswith("/event_logs") and request.user.is_staff: # don't log + if event_type.startswith("/event_logs") and request.user.is_staff: # don't log return log_event(event) +def task_track(request_info, task_info, event_type, event, page=None): + """ + Logs tracking information for 
events occuring within celery tasks. + + The `event_type` is a string naming the particular event being logged, + while `event` is a dict containing whatever additional contextual information + is desired. + + The `request_info` is a dict containing information about the original + task request. Relevant keys are `username`, `ip`, `agent`, and `host`. + While the dict is required, the values in it are not, so that {} can be + passed in. + + In addition, a `task_info` dict provides more information about the current + task, to be stored with the `event` dict. This may also be an empty dict. + + The `page` parameter is optional, and allows the name of the page to + be provided. + """ + + # supplement event information with additional information + # about the task in which it is running. + full_event = dict(event, **task_info) + + # All fields must be specified, in case the tracking information is + # also saved to the TrackingLog model. Get values from the task-level + # information, or just add placeholder values. + event = { + "username": request_info.get('username', 'unknown'), + "ip": request_info.get('ip', 'unknown'), + "event_source": "task", + "event_type": event_type, + "event": full_event, + "agent": request_info.get('agent', 'unknown'), + "page": page, + "time": datetime.datetime.utcnow().isoformat(), + "host": request_info.get('host', 'unknown') + } + + log_event(event) + + @login_required @ensure_csrf_cookie def view_tracking_log(request, args=''): + """View to output contents of TrackingLog model. 
For staff use only.""" if not request.user.is_staff: return redirect('/') nlen = 100 diff --git a/common/djangoapps/util/testing.py b/common/djangoapps/util/testing.py new file mode 100644 index 0000000000..d33f1c8f8b --- /dev/null +++ b/common/djangoapps/util/testing.py @@ -0,0 +1,34 @@ +import sys + +from django.conf import settings +from django.core.urlresolvers import clear_url_caches + + +class UrlResetMixin(object): + """Mixin to reset urls.py before and after a test + + Django memoizes the function that reads the urls module (whatever module + urlconf names). The module itself is also stored by python in sys.modules. + To fully reload it, we need to reload the python module, and also clear django's + cache of the parsed urls. + + However, the order in which we do this doesn't matter, because neither one will + get reloaded until the next request + + Doing this is expensive, so it should only be added to tests that modify settings + that affect the contents of urls.py + """ + + def _reset_urls(self, urlconf=None): + if urlconf is None: + urlconf = settings.ROOT_URLCONF + + if urlconf in sys.modules: + reload(sys.modules[urlconf]) + clear_url_caches() + + def setUp(self): + """Reset django default urlconf before tests and after tests""" + super(UrlResetMixin, self).setUp() + self._reset_urls() + self.addCleanup(self._reset_urls) diff --git a/common/djangoapps/xmodule_modifiers.py b/common/djangoapps/xmodule_modifiers.py index 45691cd854..570b38c942 100644 --- a/common/djangoapps/xmodule_modifiers.py +++ b/common/djangoapps/xmodule_modifiers.py @@ -1,7 +1,6 @@ import re import json import logging -import time import static_replace from django.conf import settings @@ -9,6 +8,8 @@ from functools import wraps from mitxmako.shortcuts import render_to_string from xmodule.seq_module import SequenceModule from xmodule.vertical_module import VerticalModule +import datetime +from django.utils.timezone import UTC log = logging.getLogger("mitx.xmodule_modifiers") @@ -83,7 
+84,7 @@ def grade_histogram(module_id): cursor.execute(q, [module_id]) grades = list(cursor.fetchall()) - grades.sort(key=lambda x: x[0]) # Add ORDER BY to sql query? + grades.sort(key=lambda x: x[0]) # Add ORDER BY to sql query? if len(grades) >= 1 and grades[0][0] is None: return [] return grades @@ -101,7 +102,7 @@ def add_histogram(get_html, module, user): @wraps(get_html) def _get_html(): - if type(module) in [SequenceModule, VerticalModule]: # TODO: make this more general, eg use an XModule attribute instead + if type(module) in [SequenceModule, VerticalModule]: # TODO: make this more general, eg use an XModule attribute instead return get_html() module_id = module.id @@ -132,7 +133,7 @@ def add_histogram(get_html, module, user): # useful to indicate to staff if problem has been released or not # TODO (ichuang): use _has_access_descriptor.can_load in lms.courseware.access, instead of now>mstart comparison here - now = time.gmtime() + now = datetime.datetime.now(UTC()) is_released = "unknown" mstart = module.descriptor.lms.start diff --git a/common/lib/calc/calc.py b/common/lib/calc/calc.py index 2ee82e2fb4..f0934a9ed5 100644 --- a/common/lib/calc/calc.py +++ b/common/lib/calc/calc.py @@ -1,34 +1,63 @@ +""" +Parser and evaluator for FormulaResponse and NumericalResponse + +Uses pyparsing to parse. Main function as of now is evaluator(). 
+""" + +import copy -import logging import math import operator import re import numpy -import numbers import scipy.constants +import calcfunctions -from pyparsing import Word, alphas, nums, oneOf, Literal -from pyparsing import ZeroOrMore, OneOrMore, StringStart -from pyparsing import StringEnd, Optional, Forward -from pyparsing import CaselessLiteral, Group, StringEnd -from pyparsing import NoMatch, stringEnd, alphanums +# have numpy ignore errors on functions outside its domain +# See http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html +numpy.seterr(all='ignore') # Also: 'warn' (default), 'raise' -default_functions = {'sin': numpy.sin, +from pyparsing import (Word, nums, Literal, + ZeroOrMore, MatchFirst, + Optional, Forward, + CaselessLiteral, + stringEnd, Suppress, Combine) + +DEFAULT_FUNCTIONS = {'sin': numpy.sin, 'cos': numpy.cos, 'tan': numpy.tan, + 'sec': calcfunctions.sec, + 'csc': calcfunctions.csc, + 'cot': calcfunctions.cot, 'sqrt': numpy.sqrt, 'log10': numpy.log10, 'log2': numpy.log2, 'ln': numpy.log, + 'exp': numpy.exp, 'arccos': numpy.arccos, 'arcsin': numpy.arcsin, 'arctan': numpy.arctan, + 'arcsec': calcfunctions.arcsec, + 'arccsc': calcfunctions.arccsc, + 'arccot': calcfunctions.arccot, 'abs': numpy.abs, 'fact': math.factorial, - 'factorial': math.factorial + 'factorial': math.factorial, + 'sinh': numpy.sinh, + 'cosh': numpy.cosh, + 'tanh': numpy.tanh, + 'sech': calcfunctions.sech, + 'csch': calcfunctions.csch, + 'coth': calcfunctions.coth, + 'arcsinh': numpy.arcsinh, + 'arccosh': numpy.arccosh, + 'arctanh': numpy.arctanh, + 'arcsech': calcfunctions.arcsech, + 'arccsch': calcfunctions.arccsch, + 'arccoth': calcfunctions.arccoth } -default_variables = {'j': numpy.complex(0, 1), +DEFAULT_VARIABLES = {'i': numpy.complex(0, 1), + 'j': numpy.complex(0, 1), 'e': numpy.e, 'pi': numpy.pi, 'k': scipy.constants.k, @@ -37,65 +66,166 @@ default_variables = {'j': numpy.complex(0, 1), 'q': scipy.constants.e } -log = 
logging.getLogger("mitx.courseware.capa") +# We eliminated the following extreme suffixes: +# P (1e15), E (1e18), Z (1e21), Y (1e24), +# f (1e-15), a (1e-18), z (1e-21), y (1e-24) +# since they're rarely used, and potentially +# confusing. They may also conflict with variables if we ever allow e.g. +# 5R instead of 5*R +SUFFIXES = {'%': 0.01, 'k': 1e3, 'M': 1e6, 'G': 1e9, 'T': 1e12, + 'c': 1e-2, 'm': 1e-3, 'u': 1e-6, 'n': 1e-9, 'p': 1e-12} class UndefinedVariable(Exception): - def raiseself(self): - ''' Helper so we can use inside of a lambda ''' - raise self - - -general_whitespace = re.compile('[^\w]+') + """ + Used to indicate the student input of a variable, which was unused by the + instructor. + """ + pass def check_variables(string, variables): - '''Confirm the only variables in string are defined. + """ + Confirm the only variables in string are defined. - Pyparsing uses a left-to-right parser, which makes the more + Otherwise, raise an UndefinedVariable containing all bad variables. + + Pyparsing uses a left-to-right parser, which makes a more elegant approach pretty hopeless. 
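The undefined-variable scan described above can be exercised standalone. This sketch mirrors the committed `check_variables` helper (numeric-looking tokens are skipped, and all offenders are reported in one exception):

```python
import re


class UndefinedVariable(Exception):
    """Raised when student input uses a variable the instructor never defined."""
    pass


def check_variables(string, variables):
    """Collect every alphanumeric token that is neither a number nor a known
    variable, then raise UndefinedVariable naming all offenders at once."""
    bad_variables = []
    for var in re.split(r'[^\w]+', string):
        if not var or var[0].isdigit():
            continue  # skip empty splits and numeric literals like 5 or 3e2
        if var not in variables:
            bad_variables.append(var)
    if bad_variables:
        raise UndefinedVariable(' '.join(bad_variables))
```

Calling `check_variables('a + c*d', {'a': 1})` raises `UndefinedVariable('c d')`, while a string containing only known variables passes silently.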
- - achar = reduce(lambda a,b:a|b ,map(Literal,alphas)) # Any alphabetic character - undefined_variable = achar + Word(alphanums) - undefined_variable.setParseAction(lambda x:UndefinedVariable("".join(x)).raiseself()) - varnames = varnames | undefined_variable - ''' - possible_variables = re.split(general_whitespace, string) # List of all alnums in string - bad_variables = list() - for v in possible_variables: - if len(v) == 0: + """ + general_whitespace = re.compile('[^\\w]+') + # List of all alnums in string + possible_variables = re.split(general_whitespace, string) + bad_variables = [] + for var in possible_variables: + if len(var) == 0: continue - if v[0] <= '9' and '0' <= 'v': # Skip things that begin with numbers + if var[0].isdigit(): # Skip things that begin with numbers continue - if v not in variables: - bad_variables.append(v) + if var not in variables: + bad_variables.append(var) if len(bad_variables) > 0: raise UndefinedVariable(' '.join(bad_variables)) +def lower_dict(input_dict): + """ + takes each key in the dict and makes it lowercase, still mapping to the + same value. + + keep in mind that it is possible (but not useful?) to define different + variables that have the same lowercase representation. It would be hard to + tell which is used in the final dict and which isn't. + """ + return {k.lower(): v for k, v in input_dict.iteritems()} + + +# The following few functions define parse actions, which are run on lists of +# results from each parse component. They convert the strings and (previously +# calculated) numbers into the number that component represents. + +def super_float(text): + """ + Like float, but with si extensions. 1k goes to 1000 + """ + if text[-1] in SUFFIXES: + return float(text[:-1]) * SUFFIXES[text[-1]] + else: + return float(text) + + +def number_parse_action(parse_result): + """ + Create a float out of its string parts + + e.g. 
[ '7', '.', '13' ] -> [ 7.13 ] + Calls super_float above + """ + return super_float("".join(parse_result)) + + +def exp_parse_action(parse_result): + """ + Take a list of numbers and exponentiate them, right to left + + e.g. [ 3, 2, 3 ] (which is 3^2^3 = 3^(2^3)) -> 6561 + """ + # pyparsing.ParseResults doesn't play well with reverse() + parse_result = reversed(parse_result) + # the result of an exponentiation is called a power + power = reduce(lambda a, b: b ** a, parse_result) + return power + + +def parallel(parse_result): + """ + Compute numbers according to the parallel resistors operator + + BTW it is commutative. Its formula is given by + out = 1 / (1/in1 + 1/in2 + ...) + e.g. [ 1, 2 ] => 2/3 + + Return NaN if there is a zero among the inputs + """ + # convert from pyparsing.ParseResults, which doesn't support '0 in parse_result' + parse_result = parse_result.asList() + if len(parse_result) == 1: + return parse_result[0] + if 0 in parse_result: + return float('nan') + reciprocals = [1. / e for e in parse_result] + return 1. / sum(reciprocals) + + +def sum_parse_action(parse_result): + """ + Add the inputs + + [ 1, '+', 2, '-', 3 ] -> 0 + + Allow a leading + or - + """ + total = 0.0 + current_op = operator.add + for token in parse_result: + if token == '+': + current_op = operator.add + elif token == '-': + current_op = operator.sub + else: + total = current_op(total, token) + return total + + +def prod_parse_action(parse_result): + """ + Multiply the inputs + + [ 1, '*', 2, '/', 3 ] => 0.66 + """ + prod = 1.0 + current_op = operator.mul + for token in parse_result: + if token == '*': + current_op = operator.mul + elif token == '/': + current_op = operator.truediv + else: + prod = current_op(prod, token) + return prod + + def evaluator(variables, functions, string, cs=False): - ''' + """ Evaluate an expression. Variables are passed as a dictionary from string to value. Unary functions are passed as a dictionary from string to function. 
Variables must be floats. cs: Case sensitive - TODO: Fix it so we can pass integers and complex numbers in variables dict - ''' - # log.debug("variables: {0}".format(variables)) - # log.debug("functions: {0}".format(functions)) - # log.debug("string: {0}".format(string)) - - def lower_dict(d): - return dict([(k.lower(), d[k]) for k in d]) - - all_variables = copy.copy(default_variables) - all_functions = copy.copy(default_functions) - - if not cs: - all_variables = lower_dict(all_variables) - all_functions = lower_dict(all_functions) + """ + all_variables = copy.copy(DEFAULT_VARIABLES) + all_functions = copy.copy(DEFAULT_FUNCTIONS) all_variables.update(variables) all_functions.update(functions) @@ -113,122 +243,59 @@ def evaluator(variables, functions, string, cs=False): if string.strip() == "": return float('nan') - ops = {"^": operator.pow, - "*": operator.mul, - "/": operator.truediv, - "+": operator.add, - "-": operator.sub, - } - # We eliminated extreme ones, since they're rarely used, and potentially - # confusing. They may also conflict with variables if we ever allow e.g. - # 5R instead of 5*R - suffixes = {'%': 0.01, 'k': 1e3, 'M': 1e6, 'G': 1e9, - 'T': 1e12, # 'P':1e15,'E':1e18,'Z':1e21,'Y':1e24, - 'c': 1e-2, 'm': 1e-3, 'u': 1e-6, - 'n': 1e-9, 'p': 1e-12} # ,'f':1e-15,'a':1e-18,'z':1e-21,'y':1e-24} - - def super_float(text): - ''' Like float, but with si extensions. 
1k goes to 1000''' - if text[-1] in suffixes: - return float(text[:-1]) * suffixes[text[-1]] - else: - return float(text) - - def number_parse_action(x): # [ '7' ] -> [ 7 ] - return [super_float("".join(x))] - - def exp_parse_action(x): # [ 2 ^ 3 ^ 2 ] -> 512 - x = [e for e in x if isinstance(e, numbers.Number)] # Ignore ^ - x.reverse() - x = reduce(lambda a, b: b ** a, x) - return x - - def parallel(x): # Parallel resistors [ 1 2 ] => 2/3 - # convert from pyparsing.ParseResults, which doesn't support '0 in x' - x = list(x) - if len(x) == 1: - return x[0] - if 0 in x: - return float('nan') - x = [1. / e for e in x if isinstance(e, numbers.Number)] # Ignore || - return 1. / sum(x) - - def sum_parse_action(x): # [ 1 + 2 - 3 ] -> 0 - total = 0.0 - op = ops['+'] - for e in x: - if e in set('+-'): - op = ops[e] - else: - total = op(total, e) - return total - - def prod_parse_action(x): # [ 1 * 2 / 3 ] => 0.66 - prod = 1.0 - op = ops['*'] - for e in x: - if e in set('*/'): - op = ops[e] - else: - prod = op(prod, e) - return prod - - def func_parse_action(x): - return [all_functions[x[0]](x[1])] - # SI suffixes and percent - number_suffix = reduce(lambda a, b: a | b, map(Literal, suffixes.keys()), NoMatch()) - (dot, minus, plus, times, div, lpar, rpar, exp) = map(Literal, ".-+*/()^") + number_suffix = MatchFirst([Literal(k) for k in SUFFIXES.keys()]) + plus_minus = Literal('+') | Literal('-') + times_div = Literal('*') | Literal('/') number_part = Word(nums) # 0.33 or 7 or .34 or 16. inner_number = (number_part + Optional("." + Optional(number_part))) | ("." 
+ number_part) + # by default pyparsing allows spaces between tokens--Combine prevents that + inner_number = Combine(inner_number) # 0.33k or -17 - number = (Optional(minus | plus) + inner_number - + Optional(CaselessLiteral("E") + Optional((plus | minus)) + number_part) + number = (inner_number + + Optional(CaselessLiteral("E") + Optional(plus_minus) + number_part) + Optional(number_suffix)) - number = number.setParseAction(number_parse_action) # Convert to number + number.setParseAction(number_parse_action) # Convert to number # Predefine recursive variables expr = Forward() - factor = Forward() - def sreduce(f, l): - ''' Same as reduce, but handle len 1 and len 0 lists sensibly ''' - if len(l) == 0: - return NoMatch() - if len(l) == 1: - return l[0] - return reduce(f, l) + # Handle variables passed in. + # E.g. if we have {'R':0.5}, we make the substitution. + # We sort the list so that var names (like "e2") match before + # mathematical constants (like "e"). This is kind of a hack. + all_variables_keys = sorted(all_variables.keys(), key=len, reverse=True) + varnames = MatchFirst([CasedLiteral(k) for k in all_variables_keys]) + varnames.setParseAction( + lambda x: [all_variables[k] for k in x] + ) - # Handle variables passed in. E.g. if we have {'R':0.5}, we make the substitution. - # Special case for no variables because of how we understand PyParsing is put together - if len(all_variables) > 0: - # We sort the list so that var names (like "e2") match before - # mathematical constants (like "e"). This is kind of a hack. - all_variables_keys = sorted(all_variables.keys(), key=len, reverse=True) - varnames = sreduce(lambda x, y: x | y, map(lambda x: CasedLiteral(x), all_variables_keys)) - varnames.setParseAction(lambda x: map(lambda y: all_variables[y], x)) - else: - varnames = NoMatch() + # if all_variables were empty, then pyparsing wants + # varnames = NoMatch() + # this is not the case, as all_variables contains the defaults # Same thing for functions. 
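The sorted-by-length trick above (so a variable like `e2` matches before the constant `e`) is the same longest-alternative-first idea visible with a plain regex; the variable table here is hypothetical:

```python
import re

# Hypothetical variable table: 'e2' is a user variable, 'e' the constant
variables = {'e': 2.718, 'e2': 7.389}

# Longest names first; otherwise the alternation would stop at 'e'
ordered = sorted(variables, key=len, reverse=True)
pattern = re.compile('|'.join(re.escape(name) for name in ordered))

match = pattern.match('e2 + 1')  # matches 'e2', not just 'e'
```

Without the sort, the alternation `e|e2` stops at the leftmost alternative and `'e2'` would match as just `'e'`, which is exactly the bug the sort avoids.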
- if len(all_functions) > 0: - funcnames = sreduce(lambda x, y: x | y, - map(lambda x: CasedLiteral(x), all_functions.keys())) - function = funcnames + lpar.suppress() + expr + rpar.suppress() - function.setParseAction(func_parse_action) - else: - function = NoMatch() + all_functions_keys = sorted(all_functions.keys(), key=len, reverse=True) + funcnames = MatchFirst([CasedLiteral(k) for k in all_functions_keys]) + function = funcnames + Suppress("(") + expr + Suppress(")") + function.setParseAction( + lambda x: [all_functions[x[0]](x[1])] + ) - atom = number | function | varnames | lpar + expr + rpar - factor << (atom + ZeroOrMore(exp + atom)).setParseAction(exp_parse_action) # 7^6 - paritem = factor + ZeroOrMore(Literal('||') + factor) # 5k || 4k - paritem = paritem.setParseAction(parallel) - term = paritem + ZeroOrMore((times | div) + paritem) # 7 * 5 / 4 - 3 - term = term.setParseAction(prod_parse_action) - expr << Optional((plus | minus)) + term + ZeroOrMore((plus | minus) + term) # -5 + 4 - 3 - expr = expr.setParseAction(sum_parse_action) + atom = number | function | varnames | Suppress("(") + expr + Suppress(")") + + # Do the following in the correct order to preserve order of operation + pow_term = atom + ZeroOrMore(Suppress("^") + atom) + pow_term.setParseAction(exp_parse_action) # 7^6 + par_term = pow_term + ZeroOrMore(Suppress('||') + pow_term) # 5k || 4k + par_term.setParseAction(parallel) + prod_term = par_term + ZeroOrMore(times_div + par_term) # 7 * 5 / 4 - 3 + prod_term.setParseAction(prod_parse_action) + sum_term = Optional(plus_minus) + prod_term + ZeroOrMore(plus_minus + prod_term) # -5 + 4 - 3 + sum_term.setParseAction(sum_parse_action) + expr << sum_term # finish the recursion return (expr + stringEnd).parseString(string)[0] diff --git a/common/lib/calc/calcfunctions.py b/common/lib/calc/calcfunctions.py new file mode 100644 index 0000000000..d0ac4f7a3d --- /dev/null +++ b/common/lib/calc/calcfunctions.py @@ -0,0 +1,99 @@ +""" +Provide the 
mathematical functions that numpy doesn't. + +Specifically, the secant/cosecant/cotangents and their inverses and +hyperbolic counterparts +""" +import numpy + + +# Normal Trig +def sec(arg): + """ + Secant + """ + return 1 / numpy.cos(arg) + + +def csc(arg): + """ + Cosecant + """ + return 1 / numpy.sin(arg) + + +def cot(arg): + """ + Cotangent + """ + return 1 / numpy.tan(arg) + + +# Inverse Trig +# http://en.wikipedia.org/wiki/Inverse_trigonometric_functions#Relationships_among_the_inverse_trigonometric_functions +def arcsec(val): + """ + Inverse secant + """ + return numpy.arccos(1. / val) + + +def arccsc(val): + """ + Inverse cosecant + """ + return numpy.arcsin(1. / val) + + +def arccot(val): + """ + Inverse cotangent + """ + if numpy.real(val) < 0: + return -numpy.pi / 2 - numpy.arctan(val) + else: + return numpy.pi / 2 - numpy.arctan(val) + + +# Hyperbolic Trig +def sech(arg): + """ + Hyperbolic secant + """ + return 1 / numpy.cosh(arg) + + +def csch(arg): + """ + Hyperbolic cosecant + """ + return 1 / numpy.sinh(arg) + + +def coth(arg): + """ + Hyperbolic cotangent + """ + return 1 / numpy.tanh(arg) + + +# And their inverses +def arcsech(val): + """ + Inverse hyperbolic secant + """ + return numpy.arccosh(1. / val) + + +def arccsch(val): + """ + Inverse hyperbolic cosecant + """ + return numpy.arcsinh(1. / val) + + +def arccoth(val): + """ + Inverse hyperbolic cotangent + """ + return numpy.arctanh(1. 
/ val) diff --git a/common/lib/calc/tests/test_calc.py b/common/lib/calc/tests/test_calc.py index cfa1b7525d..13cd9e9471 100644 --- a/common/lib/calc/tests/test_calc.py +++ b/common/lib/calc/tests/test_calc.py @@ -194,6 +194,105 @@ class EvaluatorTest(unittest.TestCase): arctan_angles = arcsin_angles self.assert_function_values('arctan', arctan_inputs, arctan_angles) + def test_reciprocal_trig_functions(self): + """ + Test the reciprocal trig functions provided in calc.py + + which are: sec, csc, cot, arcsec, arccsc, arccot + """ + angles = ['-pi/4', 'pi/6', 'pi/5', '5*pi/4', '9*pi/4', '1 + j'] + sec_values = [1.414, 1.155, 1.236, -1.414, 1.414, 0.498 + 0.591j] + csc_values = [-1.414, 2, 1.701, -1.414, 1.414, 0.622 - 0.304j] + cot_values = [-1, 1.732, 1.376, 1, 1, 0.218 - 0.868j] + + self.assert_function_values('sec', angles, sec_values) + self.assert_function_values('csc', angles, csc_values) + self.assert_function_values('cot', angles, cot_values) + + arcsec_inputs = ['1.1547', '1.2361', '2', '-2', '-1.4142', '0.4983+0.5911*j'] + arcsec_angles = [0.524, 0.628, 1.047, 2.094, 2.356, 1 + 1j] + self.assert_function_values('arcsec', arcsec_inputs, arcsec_angles) + + arccsc_inputs = ['-1.1547', '-1.4142', '2', '1.7013', '1.1547', '0.6215-0.3039*j'] + arccsc_angles = [-1.047, -0.785, 0.524, 0.628, 1.047, 1 + 1j] + self.assert_function_values('arccsc', arccsc_inputs, arccsc_angles) + + # Has the same range as arccsc + arccot_inputs = ['-0.5774', '-1', '1.7321', '1.3764', '0.5774', '(0.2176-0.868*j)'] + arccot_angles = arccsc_angles + self.assert_function_values('arccot', arccot_inputs, arccot_angles) + + def test_hyperbolic_functions(self): + """ + Test the hyperbolic functions + + which are: sinh, cosh, tanh, sech, csch, coth + """ + inputs = ['0', '0.5', '1', '2', '1+j'] + neg_inputs = ['0', '-0.5', '-1', '-2', '-1-j'] + negate = lambda x: [-k for k in x] + + # sinh is odd + sinh_vals = [0, 0.521, 1.175, 3.627, 0.635 + 1.298j] + self.assert_function_values('sinh', 
inputs, sinh_vals) + self.assert_function_values('sinh', neg_inputs, negate(sinh_vals)) + + # cosh is even - do not negate + cosh_vals = [1, 1.128, 1.543, 3.762, 0.834 + 0.989j] + self.assert_function_values('cosh', inputs, cosh_vals) + self.assert_function_values('cosh', neg_inputs, cosh_vals) + + # tanh is odd + tanh_vals = [0, 0.462, 0.762, 0.964, 1.084 + 0.272j] + self.assert_function_values('tanh', inputs, tanh_vals) + self.assert_function_values('tanh', neg_inputs, negate(tanh_vals)) + + # sech is even - do not negate + sech_vals = [1, 0.887, 0.648, 0.266, 0.498 - 0.591j] + self.assert_function_values('sech', inputs, sech_vals) + self.assert_function_values('sech', neg_inputs, sech_vals) + + # the following functions do not have 0 in their domain + inputs = inputs[1:] + neg_inputs = neg_inputs[1:] + + # csch is odd + csch_vals = [1.919, 0.851, 0.276, 0.304 - 0.622j] + self.assert_function_values('csch', inputs, csch_vals) + self.assert_function_values('csch', neg_inputs, negate(csch_vals)) + + # coth is odd + coth_vals = [2.164, 1.313, 1.037, 0.868 - 0.218j] + self.assert_function_values('coth', inputs, coth_vals) + self.assert_function_values('coth', neg_inputs, negate(coth_vals)) + + def test_hyperbolic_inverses(self): + """ + Test the inverse hyperbolic functions + + which are of the form arc[X]h + """ + results = [0, 0.5, 1, 2, 1 + 1j] + + sinh_vals = ['0', '0.5211', '1.1752', '3.6269', '0.635+1.2985*j'] + self.assert_function_values('arcsinh', sinh_vals, results) + + cosh_vals = ['1', '1.1276', '1.5431', '3.7622', '0.8337+0.9889*j'] + self.assert_function_values('arccosh', cosh_vals, results) + + tanh_vals = ['0', '0.4621', '0.7616', '0.964', '1.0839+0.2718*j'] + self.assert_function_values('arctanh', tanh_vals, results) + + sech_vals = ['1.0', '0.8868', '0.6481', '0.2658', '0.4983-0.5911*j'] + self.assert_function_values('arcsech', sech_vals, results) + + results = results[1:] + csch_vals = ['1.919', '0.8509', '0.2757', '0.3039-0.6215*j'] + 
self.assert_function_values('arccsch', csch_vals, results) + + coth_vals = ['2.164', '1.313', '1.0373', '0.868-0.2176*j'] + self.assert_function_values('arccoth', coth_vals, results) + def test_other_functions(self): """ Test the non-trig functions provided in calc.py diff --git a/common/lib/capa/capa/capa_problem.py b/common/lib/capa/capa/capa_problem.py index 150b3b3c9b..2a9f3d82a3 100644 --- a/common/lib/capa/capa/capa_problem.py +++ b/common/lib/capa/capa/capa_problem.py @@ -15,25 +15,22 @@ This is used by capa_module. from datetime import datetime import logging -import math -import numpy import os.path import re -import sys from lxml import etree from xml.sax.saxutils import unescape from copy import deepcopy -from .correctmap import CorrectMap -import inputtypes -import customrender -from .util import contextualize_text, convert_files_to_filenames -import xqueue_interface +from capa.correctmap import CorrectMap +import capa.inputtypes as inputtypes +import capa.customrender as customrender +from capa.util import contextualize_text, convert_files_to_filenames +import capa.xqueue_interface as xqueue_interface # to be replaced with auto-registering -import responsetypes -import safe_exec +import capa.responsetypes as responsetypes +from capa.safe_exec import safe_exec # dict of tagname, Response Class -- this should come from auto-registering response_tag_dict = dict([(x.response_tag, x) for x in responsetypes.__all__]) @@ -46,8 +43,8 @@ response_properties = ["codeparam", "responseparam", "answer", "openendedparam"] # special problem tags which should be turned into innocuous HTML html_transforms = {'problem': {'tag': 'div'}, - "text": {'tag': 'span'}, - "math": {'tag': 'span'}, + 'text': {'tag': 'span'}, + 'math': {'tag': 'span'}, } # These should be removed from HTML output, including all subelements @@ -134,7 +131,6 @@ class LoncapaProblem(object): self.extracted_tree = self._extract_html(self.tree) - def do_reset(self): ''' Reset internal state to 
unfinished, with no answers @@ -175,7 +171,7 @@ class LoncapaProblem(object): Return the maximum score for this problem. ''' maxscore = 0 - for response, responder in self.responders.iteritems(): + for responder in self.responders.values(): maxscore += responder.get_max_score() return maxscore @@ -220,7 +216,7 @@ class LoncapaProblem(object): def ungraded_response(self, xqueue_msg, queuekey): ''' Handle any responses from the xqueue that do not contain grades - Will try to pass the queue message to all inputtypes that can handle ungraded responses + Will try to pass the queue message to all inputtypes that can handle ungraded responses Does not return any value ''' @@ -230,7 +226,6 @@ class LoncapaProblem(object): if hasattr(the_input, 'ungraded_response'): the_input.ungraded_response(xqueue_msg, queuekey) - def is_queued(self): ''' Returns True if any part of the problem has been submitted to an external queue @@ -238,7 +233,6 @@ class LoncapaProblem(object): ''' return any(self.correct_map.is_queued(answer_id) for answer_id in self.correct_map) - def get_recentmost_queuetime(self): ''' Returns a DateTime object that represents the timestamp of the most recent @@ -256,11 +250,11 @@ class LoncapaProblem(object): return max(queuetimes) - def grade_answers(self, answers): ''' Grade student responses. Called by capa_module.check_problem. - answers is a dict of all the entries from request.POST, but with the first part + + `answers` is a dict of all the entries from request.POST, but with the first part of each key removed (the string before the first "_"). Thus, for example, input_ID123 -> ID123, and input_fromjs_ID123 -> fromjs_ID123 @@ -270,24 +264,72 @@ class LoncapaProblem(object): # if answers include File objects, convert them to filenames. self.student_answers = convert_files_to_filenames(answers) + return self._grade_answers(answers) + def supports_rescoring(self): + """ + Checks that the current problem definition permits rescoring. 
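The rescoring capability check reduces to an `all()` over responder capabilities; a minimal sketch with stand-in responder objects (`StubResponder` is hypothetical, only the field the real check inspects is modeled):

```python
class StubResponder(object):
    """Stand-in for a capa responsetype; holds only allowed_inputfields."""
    def __init__(self, allowed_inputfields):
        self.allowed_inputfields = allowed_inputfields


def supports_rescoring(responders):
    """Rescoring is permitted only if no responder accepts file submissions,
    since uploaded files cannot be recovered from persisted answers."""
    return all('filesubmission' not in responder.allowed_inputfields
               for responder in responders)
```

A single file-submission responder anywhere in the problem disables rescoring for the whole problem.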
+ + More precisely, it checks that there are no response types in + the current problem that are not fully supported (yet) for rescoring. + + This includes responsetypes for which the student's answer + is not properly stored in state, i.e. file submissions. At present, + we have no way to know if an existing response was actually a real + answer or merely the filename of a file submitted as an answer. + + It turns out that because rescoring is a background task, limiting + it to responsetypes that don't support file submissions also means + that the responsetypes are synchronous. This is convenient as it + permits rescoring to be complete when the rescoring call returns. + """ + return all('filesubmission' not in responder.allowed_inputfields for responder in self.responders.values()) + + def rescore_existing_answers(self): + """ + Rescore student responses. Called by capa_module.rescore_problem. + """ + return self._grade_answers(None) + + def _grade_answers(self, student_answers): + """ + Internal grading call used for checking new 'student_answers' and also + rescoring existing student_answers. + + For new student_answers being graded, `student_answers` is a dict of all the + entries from request.POST, but with the first part of each key removed + (the string before the first "_"). Thus, for example, + input_ID123 -> ID123, and input_fromjs_ID123 -> fromjs_ID123. + + For rescoring, `student_answers` is None. + + Calls the Response for each question in this problem, to do the actual grading. 
+ """ # old CorrectMap oldcmap = self.correct_map # start new with empty CorrectMap newcmap = CorrectMap() - # log.debug('Responders: %s' % self.responders) + # Call each responsetype instance to do actual grading for responder in self.responders.values(): - # File objects are passed only if responsetype explicitly allows for file - # submissions - if 'filesubmission' in responder.allowed_inputfields: - results = responder.evaluate_answers(answers, oldcmap) + # File objects are passed only if responsetype explicitly allows + # for file submissions. But we have no way of knowing if + # student_answers contains a proper answer or the filename of + # an earlier submission, so for now skip these entirely. + # TODO: figure out where to get file submissions when rescoring. + if 'filesubmission' in responder.allowed_inputfields and student_answers is None: + raise Exception("Cannot rescore problems with possible file submissions") + + # use 'student_answers' only if it is provided, and if it might contain a file + # submission that would not exist in the persisted "student_answers". + if 'filesubmission' in responder.allowed_inputfields and student_answers is not None: + results = responder.evaluate_answers(student_answers, oldcmap) else: - results = responder.evaluate_answers(convert_files_to_filenames(answers), oldcmap) + results = responder.evaluate_answers(self.student_answers, oldcmap) newcmap.update(results) + self.correct_map = newcmap - # log.debug('%s: in grade_answers, answers=%s, cmap=%s' % (self,answers,newcmap)) return newcmap def get_question_answers(self): @@ -331,7 +373,6 @@ class LoncapaProblem(object): html = contextualize_text(etree.tostring(self._extract_html(self.tree)), self.context) return html - def handle_input_ajax(self, get): ''' InputTypes can support specialized AJAX calls. 
Find the correct input and pass along the correct data @@ -348,8 +389,6 @@ class LoncapaProblem(object): log.warning("Could not find matching input for id: %s" % input_id) return {} - - # ======= Private Methods Below ======== def _process_includes(self): @@ -359,16 +398,16 @@ class LoncapaProblem(object): ''' includes = self.tree.findall('.//include') for inc in includes: - file = inc.get('file') - if file is not None: + filename = inc.get('file') + if filename is not None: try: # open using ModuleSystem OSFS filestore - ifp = self.system.filestore.open(file) + ifp = self.system.filestore.open(filename) except Exception as err: log.warning('Error %s in problem xml include: %s' % ( err, etree.tostring(inc, pretty_print=True))) log.warning('Cannot find file %s in %s' % ( - file, self.system.filestore)) + filename, self.system.filestore)) # if debugging, don't fail - just log error # TODO (vshnayder): need real error handling, display to users if not self.system.get('DEBUG'): @@ -381,7 +420,7 @@ class LoncapaProblem(object): except Exception as err: log.warning('Error %s in problem xml include: %s' % ( err, etree.tostring(inc, pretty_print=True))) - log.warning('Cannot parse XML in %s' % (file)) + log.warning('Cannot parse XML in %s' % (filename)) # if debugging, don't fail - just log error # TODO (vshnayder): same as above if not self.system.get('DEBUG'): @@ -389,11 +428,11 @@ class LoncapaProblem(object): else: continue - # insert new XML into tree in place of inlcude + # insert new XML into tree in place of include parent = inc.getparent() parent.insert(parent.index(inc), incxml) parent.remove(inc) - log.debug('Included %s into %s' % (file, self.problem_id)) + log.debug('Included %s into %s' % (filename, self.problem_id)) def _extract_system_path(self, script): """ @@ -463,13 +502,14 @@ class LoncapaProblem(object): if all_code: try: - safe_exec.safe_exec( + safe_exec( all_code, context, random_seed=self.seed, python_path=python_path, cache=self.system.cache, 
slug=self.problem_id, + unsafely=self.system.can_execute_unsafe_code(), ) except Exception as err: log.exception("Error while execing script code: " + all_code) @@ -518,18 +558,18 @@ class LoncapaProblem(object): value = "" if self.student_answers and problemid in self.student_answers: value = self.student_answers[problemid] - + if input_id not in self.input_state: self.input_state[input_id] = {} - + # do the rendering state = {'value': value, - 'status': status, - 'id': input_id, - 'input_state': self.input_state[input_id], - 'feedback': {'message': msg, - 'hint': hint, - 'hintmode': hintmode, }} + 'status': status, + 'id': input_id, + 'input_state': self.input_state[input_id], + 'feedback': {'message': msg, + 'hint': hint, + 'hintmode': hintmode, }} input_type_cls = inputtypes.registry.get_class_for_tag(problemtree.tag) # save the input type so that we can make ajax calls on it if we need to @@ -553,7 +593,7 @@ class LoncapaProblem(object): for item in problemtree: item_xhtml = self._extract_html(item) if item_xhtml is not None: - tree.append(item_xhtml) + tree.append(item_xhtml) if tree.tag in html_transforms: tree.tag = html_transforms[problemtree.tag]['tag'] diff --git a/common/lib/capa/capa/inputtypes.py b/common/lib/capa/capa/inputtypes.py index 65280d6d29..446b832dd7 100644 --- a/common/lib/capa/capa/inputtypes.py +++ b/common/lib/capa/capa/inputtypes.py @@ -144,11 +144,11 @@ class InputTypeBase(object): self.tag = xml.tag self.system = system - ## NOTE: ID should only come from one place. If it comes from multiple, - ## we use state first, XML second (in case the xml changed, but we have - ## existing state with an old id). Since we don't make this guarantee, - ## we can swap this around in the future if there's a more logical - ## order. + # NOTE: ID should only come from one place. If it comes from multiple, + # we use state first, XML second (in case the xml changed, but we have + # existing state with an old id). 
Since we don't make this guarantee, + # we can swap this around in the future if there's a more logical + # order. self.input_id = state.get('id', xml.get('id')) if self.input_id is None: @@ -769,7 +769,7 @@ class MatlabInput(CodeInput): # construct xqueue headers qinterface = self.system.xqueue['interface'] - qtime = datetime.strftime(datetime.utcnow(), xqueue_interface.dateformat) + qtime = datetime.utcnow().strftime(xqueue_interface.dateformat) callback_url = self.system.xqueue['construct_callback']('ungraded_response') anonymous_student_id = self.system.anonymous_student_id queuekey = xqueue_interface.make_hashkey(str(self.system.seed) + qtime + diff --git a/common/lib/capa/capa/responsetypes.py b/common/lib/capa/capa/responsetypes.py index 0fa50079de..80227490da 100644 --- a/common/lib/capa/capa/responsetypes.py +++ b/common/lib/capa/capa/responsetypes.py @@ -288,7 +288,14 @@ class LoncapaResponse(object): } try: - safe_exec.safe_exec(code, globals_dict, python_path=self.context['python_path'], slug=self.id) + safe_exec.safe_exec( + code, + globals_dict, + python_path=self.context['python_path'], + slug=self.id, + random_seed=self.context['seed'], + unsafely=self.system.can_execute_unsafe_code(), + ) except Exception as err: msg = 'Error %s in evaluating hint function %s' % (err, hintfn) msg += "\nSee XML source line %s" % getattr( @@ -973,7 +980,14 @@ class CustomResponse(LoncapaResponse): 'ans': ans, } globals_dict.update(kwargs) - safe_exec.safe_exec(code, globals_dict, python_path=self.context['python_path'], slug=self.id) + safe_exec.safe_exec( + code, + globals_dict, + python_path=self.context['python_path'], + slug=self.id, + random_seed=self.context['seed'], + unsafely=self.system.can_execute_unsafe_code(), + ) return globals_dict['cfn_return'] return check_function @@ -1090,7 +1104,14 @@ class CustomResponse(LoncapaResponse): # exec the check function if isinstance(self.code, basestring): try: - safe_exec.safe_exec(self.code, self.context, 
cache=self.system.cache, slug=self.id) + safe_exec.safe_exec( + self.code, + self.context, + cache=self.system.cache, + slug=self.id, + random_seed=self.context['seed'], + unsafely=self.system.can_execute_unsafe_code(), + ) except Exception as err: self._handle_exec_exception(err) @@ -1717,6 +1738,7 @@ class FormulaResponse(LoncapaResponse): student_variables = dict() # ranges give numerical ranges for testing for var in ranges: + # TODO: allow specified ranges (i.e. integers and complex numbers) for random variables value = random.uniform(*ranges[var]) instructor_variables[str(var)] = value student_variables[str(var)] = value @@ -1814,7 +1836,14 @@ class SchematicResponse(LoncapaResponse): ] self.context.update({'submission': submission}) try: - safe_exec.safe_exec(self.code, self.context, cache=self.system.cache, slug=self.id) + safe_exec.safe_exec( + self.code, + self.context, + cache=self.system.cache, + slug=self.id, + random_seed=self.context['seed'], + unsafely=self.system.can_execute_unsafe_code(), + ) except Exception as err: msg = 'Error %s in evaluating SchematicResponse' % err raise ResponseError(msg) diff --git a/common/lib/capa/capa/safe_exec/safe_exec.py b/common/lib/capa/capa/safe_exec/safe_exec.py index 67e93be46f..3ab8f0bf9e 100644 --- a/common/lib/capa/capa/safe_exec/safe_exec.py +++ b/common/lib/capa/capa/safe_exec/safe_exec.py @@ -1,6 +1,7 @@ """Capa's specialized use of codejail.safe_exec.""" from codejail.safe_exec import safe_exec as codejail_safe_exec +from codejail.safe_exec import not_safe_exec as codejail_not_safe_exec from codejail.safe_exec import json_safe, SafeExecException from . 
import lazymod from statsd import statsd @@ -71,7 +72,7 @@ def update_hash(hasher, obj): @statsd.timed('capa.safe_exec.time') -def safe_exec(code, globals_dict, random_seed=None, python_path=None, cache=None, slug=None): +def safe_exec(code, globals_dict, random_seed=None, python_path=None, cache=None, slug=None, unsafely=False): """ Execute python code safely. @@ -90,6 +91,8 @@ def safe_exec(code, globals_dict, random_seed=None, python_path=None, cache=None `slug` is an arbitrary string, a description that's meaningful to the caller, that will be used in log messages. + If `unsafely` is true, then the code will actually be executed without sandboxing. + """ # Check the cache for a previous result. if cache: @@ -111,9 +114,15 @@ def safe_exec(code, globals_dict, random_seed=None, python_path=None, cache=None # Create the complete code we'll run. code_prolog = CODE_PROLOG % random_seed + # Decide which code executor to use. + if unsafely: + exec_fn = codejail_not_safe_exec + else: + exec_fn = codejail_safe_exec + # Run the code! Results are side effects in globals_dict. 
try: - codejail_safe_exec( + exec_fn( code_prolog + LAZY_IMPORTS + code, globals_dict, python_path=python_path, slug=slug, ) diff --git a/common/lib/capa/capa/safe_exec/tests/test_safe_exec.py b/common/lib/capa/capa/safe_exec/tests/test_safe_exec.py index 4592af8305..f8a8a32297 100644 --- a/common/lib/capa/capa/safe_exec/tests/test_safe_exec.py +++ b/common/lib/capa/capa/safe_exec/tests/test_safe_exec.py @@ -1,13 +1,17 @@ """Test safe_exec.py""" import hashlib +import os import os.path import random import textwrap import unittest +from nose.plugins.skip import SkipTest + from capa.safe_exec import safe_exec, update_hash from codejail.safe_exec import SafeExecException +from codejail.jail_code import is_configured class TestSafeExec(unittest.TestCase): @@ -68,6 +72,24 @@ class TestSafeExec(unittest.TestCase): self.assertIn("ZeroDivisionError", cm.exception.message) +class TestSafeOrNot(unittest.TestCase): + def test_cant_do_something_forbidden(self): + # Can't test for forbiddenness if CodeJail isn't configured for python. 
+ if not is_configured("python"): + raise SkipTest + + g = {} + with self.assertRaises(SafeExecException) as cm: + safe_exec("import os; files = os.listdir('/')", g) + self.assertIn("OSError", cm.exception.message) + self.assertIn("Permission denied", cm.exception.message) + + def test_can_do_something_forbidden_if_run_unsafely(self): + g = {} + safe_exec("import os; files = os.listdir('/')", g, unsafely=True) + self.assertEqual(g['files'], os.listdir('/')) + + class DictCache(object): """A cache implementation over a simple dict, for testing.""" diff --git a/common/lib/capa/capa/tests/test_responsetypes.py b/common/lib/capa/capa/tests/test_responsetypes.py index 780c475b09..68be54b6af 100644 --- a/common/lib/capa/capa/tests/test_responsetypes.py +++ b/common/lib/capa/capa/tests/test_responsetypes.py @@ -4,7 +4,6 @@ Tests of responsetypes from datetime import datetime import json -from nose.plugins.skip import SkipTest import os import random import unittest @@ -56,9 +55,18 @@ class ResponseTest(unittest.TestCase): self.assertEqual(result, 'incorrect', msg="%s should be marked incorrect" % str(input_str)) + def _get_random_number_code(self): + """Returns code to be used to generate a random result.""" + return "str(random.randint(0, 1e9))" + + def _get_random_number_result(self, seed_value): + """Returns a result that should be generated using the random_number_code.""" + rand = random.Random(seed_value) + return str(rand.randint(0, 1e9)) + class MultiChoiceResponseTest(ResponseTest): - from response_xml_factory import MultipleChoiceResponseXMLFactory + from capa.tests.response_xml_factory import MultipleChoiceResponseXMLFactory xml_factory_class = MultipleChoiceResponseXMLFactory def test_multiple_choice_grade(self): @@ -80,7 +88,7 @@ class MultiChoiceResponseTest(ResponseTest): class TrueFalseResponseTest(ResponseTest): - from response_xml_factory import TrueFalseResponseXMLFactory + from capa.tests.response_xml_factory import TrueFalseResponseXMLFactory 
xml_factory_class = TrueFalseResponseXMLFactory def test_true_false_grade(self): @@ -120,7 +128,7 @@ class TrueFalseResponseTest(ResponseTest): class ImageResponseTest(ResponseTest): - from response_xml_factory import ImageResponseXMLFactory + from capa.tests.response_xml_factory import ImageResponseXMLFactory xml_factory_class = ImageResponseXMLFactory def test_rectangle_grade(self): @@ -184,7 +192,7 @@ class ImageResponseTest(ResponseTest): class SymbolicResponseTest(ResponseTest): - from response_xml_factory import SymbolicResponseXMLFactory + from capa.tests.response_xml_factory import SymbolicResponseXMLFactory xml_factory_class = SymbolicResponseXMLFactory def test_grade_single_input(self): @@ -224,8 +232,8 @@ class SymbolicResponseTest(ResponseTest): def test_complex_number_grade(self): problem = self.build_problem(math_display=True, - expect="[[cos(theta),i*sin(theta)],[i*sin(theta),cos(theta)]]", - options=["matrix", "imaginary"]) + expect="[[cos(theta),i*sin(theta)],[i*sin(theta),cos(theta)]]", + options=["matrix", "imaginary"]) # For LaTeX-style inputs, symmath_check() will try to contact # a server to convert the input to MathML. 
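The `_get_random_number_code` / `_get_random_number_result` helpers added to `ResponseTest` in this diff rely on a property of Python's PRNG: two generators constructed with the same seed produce identical sequences, so a test can predict what a seeded script will compute inside the sandbox. A minimal sketch of that pattern, using only the standard library; it uses `10**9` in place of the commit's `1e9` so the bound is an int on any Python version:

```python
import random

def get_random_number_result(seed_value):
    # A fresh generator seeded with the problem's seed reproduces the
    # value a sandboxed script would compute from the same seed.
    rand = random.Random(seed_value)
    return str(rand.randint(0, 10**9))

# Equal seeds give equal "random" values, which is what lets these
# tests assert on the output of code run via safe_exec.
assert get_random_number_result(42) == get_random_number_result(42)

# Reseeding an existing generator replays the same sequence.
r = random.Random(7)
first = r.randint(0, 10**9)
r.seed(7)
assert r.randint(0, 10**9) == first
```

This is the reason threading `random_seed=self.context['seed']` into `safe_exec` makes per-student randomization reproducible: the sandbox seeds its `random` module from the problem seed, and the test reconstructs the same value outside the sandbox.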
@@ -312,16 +320,16 @@ class SymbolicResponseTest(ResponseTest): # Should not allow multiple inputs, since we specify # only one "expect" value with self.assertRaises(Exception): - problem = self.build_problem(math_display=True, - expect="2*x+3*y", - num_inputs=3) + self.build_problem(math_display=True, + expect="2*x+3*y", + num_inputs=3) def _assert_symbolic_grade(self, problem, - student_input, - dynamath_input, - expected_correctness): + student_input, + dynamath_input, + expected_correctness): input_dict = {'1_2_1': str(student_input), - '1_2_1_dynamath': str(dynamath_input)} + '1_2_1_dynamath': str(dynamath_input)} correct_map = problem.grade_answers(input_dict) @@ -330,7 +338,7 @@ class SymbolicResponseTest(ResponseTest): class OptionResponseTest(ResponseTest): - from response_xml_factory import OptionResponseXMLFactory + from capa.tests.response_xml_factory import OptionResponseXMLFactory xml_factory_class = OptionResponseXMLFactory def test_grade(self): @@ -350,7 +358,7 @@ class FormulaResponseTest(ResponseTest): """ Test the FormulaResponse class """ - from response_xml_factory import FormulaResponseXMLFactory + from capa.tests.response_xml_factory import FormulaResponseXMLFactory xml_factory_class = FormulaResponseXMLFactory def test_grade(self): @@ -570,7 +578,7 @@ class FormulaResponseTest(ResponseTest): class StringResponseTest(ResponseTest): - from response_xml_factory import StringResponseXMLFactory + from capa.tests.response_xml_factory import StringResponseXMLFactory xml_factory_class = StringResponseXMLFactory def test_case_sensitive(self): @@ -640,9 +648,25 @@ class StringResponseTest(ResponseTest): correct_map = problem.grade_answers(input_dict) self.assertEquals(correct_map.get_hint('1_2_1'), "Hello??") + def test_hint_function_randomization(self): + # The hint function should get the seed from the problem. 
+ problem = self.build_problem( + answer="1", + hintfn="gimme_a_random_hint", + script=textwrap.dedent(""" + def gimme_a_random_hint(answer_ids, student_answers, new_cmap, old_cmap): + answer = {code} + new_cmap.set_hint_and_mode(answer_ids[0], answer, "always") + + """.format(code=self._get_random_number_code())) + ) + correct_map = problem.grade_answers({'1_2_1': '2'}) + hint = correct_map.get_hint('1_2_1') + self.assertEqual(hint, self._get_random_number_result(problem.seed)) + class CodeResponseTest(ResponseTest): - from response_xml_factory import CodeResponseXMLFactory + from capa.tests.response_xml_factory import CodeResponseXMLFactory xml_factory_class = CodeResponseXMLFactory def setUp(self): @@ -656,6 +680,7 @@ class CodeResponseTest(ResponseTest): @staticmethod def make_queuestate(key, time): + """Create queuestate dict""" timestr = datetime.strftime(time, dateformat) return {'key': key, 'time': timestr} @@ -693,7 +718,7 @@ class CodeResponseTest(ResponseTest): old_cmap = CorrectMap() for i, answer_id in enumerate(answer_ids): queuekey = 1000 + i - queuestate = CodeResponseTest.make_queuestate(1000 + i, datetime.now()) + queuestate = CodeResponseTest.make_queuestate(queuekey, datetime.now()) old_cmap.update(CorrectMap(answer_id=answer_ids[i], queuestate=queuestate)) # Message format common to external graders @@ -754,7 +779,7 @@ class CodeResponseTest(ResponseTest): for i, answer_id in enumerate(answer_ids): queuekey = 1000 + i latest_timestamp = datetime.now() - queuestate = CodeResponseTest.make_queuestate(1000 + i, latest_timestamp) + queuestate = CodeResponseTest.make_queuestate(queuekey, latest_timestamp) cmap.update(CorrectMap(answer_id=answer_id, queuestate=queuestate)) self.problem.correct_map.update(cmap) @@ -779,7 +804,7 @@ class CodeResponseTest(ResponseTest): class ChoiceResponseTest(ResponseTest): - from response_xml_factory import ChoiceResponseXMLFactory + from capa.tests.response_xml_factory import ChoiceResponseXMLFactory 
xml_factory_class = ChoiceResponseXMLFactory def test_radio_group_grade(self): @@ -811,7 +836,7 @@ class ChoiceResponseTest(ResponseTest): class JavascriptResponseTest(ResponseTest): - from response_xml_factory import JavascriptResponseXMLFactory + from capa.tests.response_xml_factory import JavascriptResponseXMLFactory xml_factory_class = JavascriptResponseXMLFactory def test_grade(self): @@ -841,7 +866,7 @@ class JavascriptResponseTest(ResponseTest): system.can_execute_unsafe_code = lambda: False with self.assertRaises(LoncapaProblemError): - problem = self.build_problem( + self.build_problem( system=system, generator_src="test_problem_generator.js", grader_src="test_problem_grader.js", @@ -852,7 +877,7 @@ class JavascriptResponseTest(ResponseTest): class NumericalResponseTest(ResponseTest): - from response_xml_factory import NumericalResponseXMLFactory + from capa.tests.response_xml_factory import NumericalResponseXMLFactory xml_factory_class = NumericalResponseXMLFactory def test_grade_exact(self): @@ -944,11 +969,10 @@ class NumericalResponseTest(ResponseTest): class CustomResponseTest(ResponseTest): - from response_xml_factory import CustomResponseXMLFactory + from capa.tests.response_xml_factory import CustomResponseXMLFactory xml_factory_class = CustomResponseXMLFactory def test_inline_code(self): - # For inline code, we directly modify global context variables # 'answers' is a list of answers provided to us # 'correct' is a list we fill in with True/False @@ -961,15 +985,14 @@ class CustomResponseTest(ResponseTest): self.assert_grade(problem, '0', 'incorrect') def test_inline_message(self): - # Inline code can update the global messages list # to pass messages to the CorrectMap for a particular input # The code can also set the global overall_message (str) # to pass a message that applies to the whole response inline_script = textwrap.dedent(""" - messages[0] = "Test Message" - overall_message = "Overall message" - """) + messages[0] = "Test Message" + 
overall_message = "Overall message" + """) problem = self.build_problem(answer=inline_script) input_dict = {'1_2_1': '0'} @@ -983,8 +1006,18 @@ class CustomResponseTest(ResponseTest): overall_msg = correctmap.get_overall_message() self.assertEqual(overall_msg, "Overall message") - def test_function_code_single_input(self): + def test_inline_randomization(self): + # Make sure the seed from the problem gets fed into the script execution. + inline_script = "messages[0] = {code}".format(code=self._get_random_number_code()) + problem = self.build_problem(answer=inline_script) + input_dict = {'1_2_1': '0'} + correctmap = problem.grade_answers(input_dict) + + input_msg = correctmap.get_msg('1_2_1') + self.assertEqual(input_msg, self._get_random_number_result(problem.seed)) + + def test_function_code_single_input(self): # For function code, we pass in these arguments: # # 'expect' is the expect attribute of the @@ -1212,6 +1245,27 @@ class CustomResponseTest(ResponseTest): with self.assertRaises(ResponseError): problem.grade_answers({'1_2_1': '42'}) + def test_setup_randomization(self): + # Ensure that the problem setup script gets the random seed from the problem. + script = textwrap.dedent(""" + num = {code} + """.format(code=self._get_random_number_code())) + problem = self.build_problem(script=script) + self.assertEqual(problem.context['num'], self._get_random_number_result(problem.seed)) + + def test_check_function_randomization(self): + # The check function should get random-seeded from the problem. 
+ script = textwrap.dedent(""" + def check_func(expect, answer_given): + return {{'ok': True, 'msg': {code} }} + """.format(code=self._get_random_number_code())) + + problem = self.build_problem(script=script, cfn="check_func", expect="42") + input_dict = {'1_2_1': '42'} + correct_map = problem.grade_answers(input_dict) + msg = correct_map.get_msg('1_2_1') + self.assertEqual(msg, self._get_random_number_result(problem.seed)) + def test_module_imports_inline(self): ''' Check that the correct modules are available to custom @@ -1271,11 +1325,10 @@ class CustomResponseTest(ResponseTest): class SchematicResponseTest(ResponseTest): - from response_xml_factory import SchematicResponseXMLFactory + from capa.tests.response_xml_factory import SchematicResponseXMLFactory xml_factory_class = SchematicResponseXMLFactory def test_grade(self): - # Most of the schematic-specific work is handled elsewhere # (in client-side JavaScript) # The is responsible only for executing the @@ -1290,7 +1343,7 @@ class SchematicResponseTest(ResponseTest): # The actual dictionary would contain schematic information # sent from the JavaScript simulation - submission_dict = {'test': 'test'} + submission_dict = {'test': 'the_answer'} input_dict = {'1_2_1': json.dumps(submission_dict)} correct_map = problem.grade_answers(input_dict) @@ -1299,8 +1352,18 @@ class SchematicResponseTest(ResponseTest): # is what we expect) self.assertEqual(correct_map.get_correctness('1_2_1'), 'correct') - def test_script_exception(self): + def test_check_function_randomization(self): + # The check function should get a random seed from the problem. 
+ script = "correct = ['correct' if (submission[0]['num'] == {code}) else 'incorrect']".format(code=self._get_random_number_code()) + problem = self.build_problem(answer=script) + submission_dict = {'num': self._get_random_number_result(problem.seed)} + input_dict = {'1_2_1': json.dumps(submission_dict)} + correct_map = problem.grade_answers(input_dict) + + self.assertEqual(correct_map.get_correctness('1_2_1'), 'correct') + + def test_script_exception(self): # Construct a script that will raise an exception script = "raise Exception('test')" problem = self.build_problem(answer=script) @@ -1313,7 +1376,7 @@ class SchematicResponseTest(ResponseTest): class AnnotationResponseTest(ResponseTest): - from response_xml_factory import AnnotationResponseXMLFactory + from capa.tests.response_xml_factory import AnnotationResponseXMLFactory xml_factory_class = AnnotationResponseXMLFactory def test_grade(self): @@ -1334,7 +1397,7 @@ class AnnotationResponseTest(ResponseTest): {'correctness': incorrect, 'points': 0, 'answers': {answer_id: 'null'}}, ] - for (index, test) in enumerate(tests): + for test in tests: expected_correctness = test['correctness'] expected_points = test['points'] answers = test['answers'] diff --git a/common/lib/xmodule/xmodule/abtest_module.py b/common/lib/xmodule/xmodule/abtest_module.py index 196154df78..2e61076e94 100644 --- a/common/lib/xmodule/xmodule/abtest_module.py +++ b/common/lib/xmodule/xmodule/abtest_module.py @@ -6,7 +6,7 @@ from xmodule.x_module import XModule from xmodule.raw_module import RawDescriptor from xmodule.xml_module import XmlDescriptor from xmodule.exceptions import InvalidDefinitionError -from xblock.core import String, Scope, Object +from xblock.core import String, Scope, Dict DEFAULT = "_DEFAULT_GROUP" @@ -32,9 +32,9 @@ def group_from_value(groups, v): class ABTestFields(object): - group_portions = Object(help="What proportions of students should go in each group", default={DEFAULT: 1}, scope=Scope.content) - group_assignments 
= Object(help="What group this user belongs to", scope=Scope.preferences, default={}) - group_content = Object(help="What content to display to each group", scope=Scope.content, default={DEFAULT: []}) + group_portions = Dict(help="What proportions of students should go in each group", default={DEFAULT: 1}, scope=Scope.content) + group_assignments = Dict(help="What group this user belongs to", scope=Scope.preferences, default={}) + group_content = Dict(help="What content to display to each group", scope=Scope.content, default={DEFAULT: []}) experiment = String(help="Experiment that this A/B test belongs to", scope=Scope.content) has_children = True diff --git a/common/lib/xmodule/xmodule/annotatable_module.py b/common/lib/xmodule/xmodule/annotatable_module.py index e0de97bb36..e8674360c3 100644 --- a/common/lib/xmodule/xmodule/annotatable_module.py +++ b/common/lib/xmodule/xmodule/annotatable_module.py @@ -125,6 +125,5 @@ class AnnotatableModule(AnnotatableFields, XModule): class AnnotatableDescriptor(AnnotatableFields, RawDescriptor): module_class = AnnotatableModule - stores_state = True template_dir_name = "annotatable" mako_template = "widgets/raw-edit.html" diff --git a/common/lib/xmodule/xmodule/capa_module.py b/common/lib/xmodule/xmodule/capa_module.py index 9e0ab16203..a03c0f4160 100644 --- a/common/lib/xmodule/xmodule/capa_module.py +++ b/common/lib/xmodule/xmodule/capa_module.py @@ -11,16 +11,16 @@ import sys from pkg_resources import resource_string from capa.capa_problem import LoncapaProblem -from capa.responsetypes import StudentInputError,\ +from capa.responsetypes import StudentInputError, \ ResponseError, LoncapaProblemError from capa.util import convert_files_to_filenames from .progress import Progress from xmodule.x_module import XModule from xmodule.raw_module import RawDescriptor from xmodule.exceptions import NotFoundError, ProcessingError -from xblock.core import Scope, String, Boolean, Object -from .fields import Timedelta, Date, 
StringyInteger, StringyFloat -from xmodule.util.date_utils import time_to_datetime +from xblock.core import Scope, String, Boolean, Dict, Integer, Float +from .fields import Timedelta, Date +from django.utils.timezone import UTC log = logging.getLogger("mitx.courseware") @@ -65,8 +65,8 @@ class ComplexEncoder(json.JSONEncoder): class CapaFields(object): - attempts = StringyInteger(help="Number of attempts taken by the student on this problem", default=0, scope=Scope.user_state) - max_attempts = StringyInteger( + attempts = Integer(help="Number of attempts taken by the student on this problem", default=0, scope=Scope.user_state) + max_attempts = Integer( display_name="Maximum Attempts", help="Defines the number of times a student can try to answer this problem. If the value is not set, infinite attempts are allowed.", values={"min": 0}, scope=Scope.settings @@ -95,12 +95,12 @@ class CapaFields(object): {"display_name": "Per Student", "value": "per_student"}] ) data = String(help="XML data for the problem", scope=Scope.content) - correct_map = Object(help="Dictionary with the correctness of current student answers", scope=Scope.user_state, default={}) - input_state = Object(help="Dictionary for maintaining the state of inputtypes", scope=Scope.user_state) - student_answers = Object(help="Dictionary with the current student responses", scope=Scope.user_state) + correct_map = Dict(help="Dictionary with the correctness of current student answers", scope=Scope.user_state, default={}) + input_state = Dict(help="Dictionary for maintaining the state of inputtypes", scope=Scope.user_state) + student_answers = Dict(help="Dictionary with the current student responses", scope=Scope.user_state) done = Boolean(help="Whether the student has answered the problem", scope=Scope.user_state) - seed = StringyInteger(help="Random seed for this student", scope=Scope.user_state) - weight = StringyFloat( + seed = Integer(help="Random seed for this student", scope=Scope.user_state) + weight 
= Float( display_name="Problem Weight", help="Defines the number of points each problem is worth. If the value is not set, each response field in the problem is worth one point.", values={"min": 0, "step": .1}, @@ -117,6 +117,8 @@ class CapaModule(CapaFields, XModule): ''' An XModule implementing LonCapa format problems, implemented by way of capa.capa_problem.LoncapaProblem + + CapaModule.__init__ takes the same arguments as xmodule.x_module:XModule.__init__ ''' icon_class = 'problem' @@ -131,10 +133,11 @@ class CapaModule(CapaFields, XModule): js_module_name = "Problem" css = {'scss': [resource_string(__name__, 'css/capa/display.scss')]} - def __init__(self, system, location, descriptor, model_data): - XModule.__init__(self, system, location, descriptor, model_data) + def __init__(self, *args, **kwargs): + """ Accepts the same arguments as xmodule.x_module:XModule.__init__ """ + XModule.__init__(self, *args, **kwargs) - due_date = time_to_datetime(self.due) + due_date = self.due if self.graceperiod is not None and due_date: self.close_date = due_date + self.graceperiod @@ -315,7 +318,7 @@ class CapaModule(CapaFields, XModule): # If the user has forced the save button to display, # then show it as long as the problem is not closed # (past due / too many attempts) - if self.force_save_button == "true": + if self.force_save_button: return not self.closed() else: is_survey_question = (self.max_attempts == 0) @@ -421,7 +424,7 @@ class CapaModule(CapaFields, XModule): # If we cannot construct the problem HTML, # then generate an error message instead. - except Exception, err: + except Exception as err: html = self.handle_problem_html_error(err) # The convention is to pass the name of the check button @@ -502,7 +505,7 @@ class CapaModule(CapaFields, XModule): Is it now past this problem's due date, including grace period? 
""" return (self.close_date is not None and - datetime.datetime.utcnow() > self.close_date) + datetime.datetime.now(UTC()) > self.close_date) def closed(self): ''' Is the student still allowed to submit answers? ''' @@ -652,7 +655,7 @@ class CapaModule(CapaFields, XModule): @staticmethod def make_dict_of_responses(get): '''Make dictionary of student responses (aka "answers") - get is POST dictionary (Djano QueryDict). + get is POST dictionary (Django QueryDict). The *get* dict has keys of the form 'x_y', which are mapped to key 'y' in the returned dict. For example, @@ -736,18 +739,18 @@ class CapaModule(CapaFields, XModule): # Too late. Cannot submit if self.closed(): event_info['failure'] = 'closed' - self.system.track_function('save_problem_check_fail', event_info) + self.system.track_function('problem_check_fail', event_info) raise NotFoundError('Problem is closed') # Problem submitted. Student should reset before checking again if self.done and self.rerandomize == "always": event_info['failure'] = 'unreset' - self.system.track_function('save_problem_check_fail', event_info) + self.system.track_function('problem_check_fail', event_info) raise NotFoundError('Problem must be reset before it can be checked again') # Problem queued. 
Students must wait a specified waittime before they are allowed to submit if self.lcp.is_queued(): - current_time = datetime.datetime.now() + current_time = datetime.datetime.now(UTC()) prev_submit_time = self.lcp.get_recentmost_queuetime() waittime_between_requests = self.system.xqueue['waittime'] if (current_time - prev_submit_time).total_seconds() < waittime_between_requests: @@ -756,6 +759,8 @@ class CapaModule(CapaFields, XModule): try: correct_map = self.lcp.grade_answers(answers) + self.attempts = self.attempts + 1 + self.lcp.done = True self.set_state_from_lcp() except (StudentInputError, ResponseError, LoncapaProblemError) as inst: @@ -775,17 +780,13 @@ class CapaModule(CapaFields, XModule): return {'success': msg} - except Exception, err: + except Exception as err: if self.system.DEBUG: msg = "Error checking problem: " + str(err) msg += '\nTraceback:\n' + traceback.format_exc() return {'success': msg} raise - self.attempts = self.attempts + 1 - self.lcp.done = True - - self.set_state_from_lcp() self.publish_grade() # success = correct if ALL questions in this problem are correct @@ -799,7 +800,7 @@ class CapaModule(CapaFields, XModule): event_info['correct_map'] = correct_map.get_dict() event_info['success'] = success event_info['attempts'] = self.attempts - self.system.track_function('save_problem_check', event_info) + self.system.track_function('problem_check', event_info) if hasattr(self.system, 'psychometrics_handler'): # update PsychometricsData using callback self.system.psychometrics_handler(self.get_state_for_lcp()) @@ -811,12 +812,92 @@ class CapaModule(CapaFields, XModule): 'contents': html, } + def rescore_problem(self): + """ + Checks whether the existing answers to a problem are correct. + + This is called when the correct answer to a problem has been changed, + and the grade should be re-evaluated. 
+ + Returns a dict with one key: + {'success' : 'correct' | 'incorrect' | AJAX alert msg string } + + Raises NotFoundError if called on a problem that has not yet been + answered, or NotImplementedError if it's a problem that cannot be rescored. + + Returns the error messages for exceptions occurring while performing + the rescoring, rather than throwing them. + """ + event_info = {'state': self.lcp.get_state(), 'problem_id': self.location.url()} + + if not self.lcp.supports_rescoring(): + event_info['failure'] = 'unsupported' + self.system.track_function('problem_rescore_fail', event_info) + raise NotImplementedError("Problem's definition does not support rescoring") + + if not self.done: + event_info['failure'] = 'unanswered' + self.system.track_function('problem_rescore_fail', event_info) + raise NotFoundError('Problem must be answered before it can be graded again') + + # get old score, for comparison: + orig_score = self.lcp.get_score() + event_info['orig_score'] = orig_score['score'] + event_info['orig_total'] = orig_score['total'] + + try: + correct_map = self.lcp.rescore_existing_answers() + + except (StudentInputError, ResponseError, LoncapaProblemError) as inst: + log.warning("Input error in capa_module:problem_rescore", exc_info=True) + event_info['failure'] = 'input_error' + self.system.track_function('problem_rescore_fail', event_info) + return {'success': u"Error: {0}".format(inst.message)} + + except Exception as err: + event_info['failure'] = 'unexpected' + self.system.track_function('problem_rescore_fail', event_info) + if self.system.DEBUG: + msg = u"Error checking problem: {0}".format(err.message) + msg += u'\nTraceback:\n' + traceback.format_exc() + return {'success': msg} + raise + + # rescoring should have no effect on attempts, so don't + # need to increment here, or mark done. Just save. 
+ self.set_state_from_lcp() + + self.publish_grade() + + new_score = self.lcp.get_score() + event_info['new_score'] = new_score['score'] + event_info['new_total'] = new_score['total'] + + # success = correct if ALL questions in this problem are correct + success = 'correct' + for answer_id in correct_map: + if not correct_map.is_correct(answer_id): + success = 'incorrect' + + # NOTE: We are logging both full grading and queued-grading submissions. In the latter, + # 'success' will always be incorrect + event_info['correct_map'] = correct_map.get_dict() + event_info['success'] = success + event_info['attempts'] = self.attempts + self.system.track_function('problem_rescore', event_info) + + # psychometrics should be called on rescoring requests in the same way as check-problem + if hasattr(self.system, 'psychometrics_handler'): # update PsychometricsData using callback + self.system.psychometrics_handler(self.get_state_for_lcp()) + + return {'success': success} + def save_problem(self, get): - ''' + """ Save the passed in answers. - Returns a dict { 'success' : bool, ['error' : error-msg]}, - with the error key only present if success is False. - ''' + Returns a dict { 'success' : bool, 'msg' : message } + The message is informative on success, and an error message on failure. 
+ """ event_info = dict() event_info['state'] = self.lcp.get_state() event_info['problem_id'] = self.location.url() @@ -902,7 +983,6 @@ class CapaDescriptor(CapaFields, RawDescriptor): module_class = CapaModule - stores_state = True has_score = True template_dir_name = 'problem' mako_template = "widgets/problem-edit.html" diff --git a/common/lib/xmodule/xmodule/combined_open_ended_module.py b/common/lib/xmodule/xmodule/combined_open_ended_module.py index b3f0e19109..ac95567946 100644 --- a/common/lib/xmodule/xmodule/combined_open_ended_module.py +++ b/common/lib/xmodule/xmodule/combined_open_ended_module.py @@ -5,10 +5,10 @@ from pkg_resources import resource_string from xmodule.raw_module import RawDescriptor from .x_module import XModule -from xblock.core import Integer, Scope, String, List +from xblock.core import Integer, Scope, String, List, Float, Boolean from xmodule.open_ended_grading_classes.combined_open_ended_modulev1 import CombinedOpenEndedV1Module, CombinedOpenEndedV1Descriptor from collections import namedtuple -from .fields import Date, StringyFloat, StringyInteger, StringyBoolean +from .fields import Date log = logging.getLogger("mitx.courseware") @@ -53,27 +53,27 @@ class CombinedOpenEndedFields(object): help="This name appears in the horizontal navigation at the top of the page.", default="Open Ended Grading", scope=Scope.settings ) - current_task_number = StringyInteger(help="Current task that the student is on.", default=0, scope=Scope.user_state) + current_task_number = Integer(help="Current task that the student is on.", default=0, scope=Scope.user_state) task_states = List(help="List of state dictionaries of each task within this module.", scope=Scope.user_state) state = String(help="Which step within the current task that the student is on.", default="initial", scope=Scope.user_state) - student_attempts = StringyInteger(help="Number of attempts taken by the student on this problem", default=0, + student_attempts = Integer(help="Number of 
attempts taken by the student on this problem", default=0, scope=Scope.user_state) - ready_to_reset = StringyBoolean( + ready_to_reset = Boolean( help="If the problem is ready to be reset or not.", default=False, scope=Scope.user_state ) - attempts = StringyInteger( + attempts = Integer( display_name="Maximum Attempts", help="The number of times the student can try to answer this problem.", default=1, scope=Scope.settings, values = {"min" : 1 } ) - is_graded = StringyBoolean(display_name="Graded", help="Whether or not the problem is graded.", default=False, scope=Scope.settings) - accept_file_upload = StringyBoolean( + is_graded = Boolean(display_name="Graded", help="Whether or not the problem is graded.", default=False, scope=Scope.settings) + accept_file_upload = Boolean( display_name="Allow File Uploads", help="Whether or not the student can submit files as a response.", default=False, scope=Scope.settings ) - skip_spelling_checks = StringyBoolean( + skip_spelling_checks = Boolean( display_name="Disable Quality Filter", help="If False, the Quality Filter is enabled and submissions with poor spelling, short length, or poor grammar will not be peer reviewed.", default=False, scope=Scope.settings @@ -86,7 +86,7 @@ class CombinedOpenEndedFields(object): ) version = VersionInteger(help="Current version number", default=DEFAULT_VERSION, scope=Scope.settings) data = String(help="XML data for the problem", scope=Scope.content) - weight = StringyFloat( + weight = Float( display_name="Problem Weight", help="Defines the number of points each problem is worth. 
If the value is not set, each problem is worth one point.", scope=Scope.settings, values = {"min" : 0 , "step": ".1"} @@ -116,6 +116,8 @@ class CombinedOpenEndedModule(CombinedOpenEndedFields, XModule): incorporates multiple children (tasks): openendedmodule selfassessmentmodule + + CombinedOpenEndedModule.__init__ takes the same arguments as xmodule.x_module:XModule.__init__ """ STATE_VERSION = 1 @@ -139,8 +141,7 @@ class CombinedOpenEndedModule(CombinedOpenEndedFields, XModule): css = {'scss': [resource_string(__name__, 'css/combinedopenended/display.scss')]} - def __init__(self, system, location, descriptor, model_data): - XModule.__init__(self, system, location, descriptor, model_data) + def __init__(self, *args, **kwargs): """ Definition file should have one or many task blocks, a rubric block, and a prompt block: @@ -175,9 +176,9 @@ class CombinedOpenEndedModule(CombinedOpenEndedFields, XModule): """ + XModule.__init__(self, *args, **kwargs) - self.system = system - self.system.set('location', location) + self.system.set('location', self.location) if self.task_states is None: self.task_states = [] @@ -189,13 +190,11 @@ class CombinedOpenEndedModule(CombinedOpenEndedFields, XModule): attributes = self.student_attributes + self.settings_attributes - static_data = { - 'rewrite_content_links': self.rewrite_content_links, - } + static_data = {} instance_state = {k: getattr(self, k) for k in attributes} self.child_descriptor = version_tuple.descriptor(self.system) self.child_definition = version_tuple.descriptor.definition_from_xml(etree.fromstring(self.data), self.system) - self.child_module = version_tuple.module(self.system, location, self.child_definition, self.child_descriptor, + self.child_module = version_tuple.module(self.system, self.location, self.child_definition, self.child_descriptor, instance_state=instance_state, static_data=static_data, attributes=attributes) self.save_instance_data() @@ -239,7 +238,6 @@ class 
CombinedOpenEndedDescriptor(CombinedOpenEndedFields, RawDescriptor): mako_template = "widgets/open-ended-edit.html" module_class = CombinedOpenEndedModule - stores_state = True has_score = True always_recalculate_grades = True template_dir_name = "combinedopenended" diff --git a/common/lib/xmodule/xmodule/conditional_module.py b/common/lib/xmodule/xmodule/conditional_module.py index e669046ecb..9fda387ecb 100644 --- a/common/lib/xmodule/xmodule/conditional_module.py +++ b/common/lib/xmodule/xmodule/conditional_module.py @@ -92,7 +92,7 @@ class ConditionalModule(ConditionalFields, XModule): if xml_value and self.required_modules: for module in self.required_modules: if not hasattr(module, attr_name): - # We don't throw an exception here because it is possible for + # We don't throw an exception here because it is possible for # the descriptor of a required module to have a property but # for the resulting module to be a (flavor of) ErrorModule. # So just log and return false. @@ -161,7 +161,6 @@ class ConditionalDescriptor(ConditionalFields, SequenceDescriptor): filename_extension = "xml" - stores_state = True has_score = False @staticmethod diff --git a/common/lib/xmodule/xmodule/contentstore/django.py b/common/lib/xmodule/xmodule/contentstore/django.py index 83a2508d96..f163348cc8 100644 --- a/common/lib/xmodule/xmodule/contentstore/django.py +++ b/common/lib/xmodule/xmodule/contentstore/django.py @@ -3,7 +3,7 @@ from importlib import import_module from django.conf import settings -_CONTENTSTORE = None +_CONTENTSTORE = {} def load_function(path): @@ -17,13 +17,16 @@ def load_function(path): return getattr(import_module(module_path), name) -def contentstore(): +def contentstore(name='default'): global _CONTENTSTORE - if _CONTENTSTORE is None: + if name not in _CONTENTSTORE: class_ = load_function(settings.CONTENTSTORE['ENGINE']) options = {} options.update(settings.CONTENTSTORE['OPTIONS']) - _CONTENTSTORE = class_(**options) + if 'ADDITIONAL_OPTIONS' in 
settings.CONTENTSTORE: + if name in settings.CONTENTSTORE['ADDITIONAL_OPTIONS']: + options.update(settings.CONTENTSTORE['ADDITIONAL_OPTIONS'][name]) + _CONTENTSTORE[name] = class_(**options) - return _CONTENTSTORE + return _CONTENTSTORE[name] diff --git a/common/lib/xmodule/xmodule/contentstore/mongo.py b/common/lib/xmodule/xmodule/contentstore/mongo.py index 58fadb7957..fa0fc95181 100644 --- a/common/lib/xmodule/xmodule/contentstore/mongo.py +++ b/common/lib/xmodule/xmodule/contentstore/mongo.py @@ -1,4 +1,3 @@ -from bson.son import SON from pymongo import Connection import gridfs from gridfs.errors import NoFile @@ -15,15 +14,16 @@ import os class MongoContentStore(ContentStore): - def __init__(self, host, db, port=27017, user=None, password=None, **kwargs): + def __init__(self, host, db, port=27017, user=None, password=None, bucket='fs', **kwargs): logging.debug('Using MongoDB for static content serving at host={0} db={1}'.format(host, db)) _db = Connection(host=host, port=port, **kwargs)[db] if user is not None and password is not None: _db.authenticate(user, password) - self.fs = gridfs.GridFS(_db) - self.fs_files = _db["fs.files"] # the underlying collection GridFS uses + self.fs = gridfs.GridFS(_db, bucket) + + self.fs_files = _db[bucket + ".files"] # the underlying collection GridFS uses def save(self, content): id = content.get_id() @@ -43,7 +43,7 @@ class MongoContentStore(ContentStore): if self.fs.exists({"_id": id}): self.fs.delete(id) - def find(self, location): + def find(self, location, throw_on_not_found=True): id = StaticContent.get_id_from_location(location) try: with self.fs.get(id) as fp: @@ -52,7 +52,10 @@ class MongoContentStore(ContentStore): thumbnail_location=fp.thumbnail_location if hasattr(fp, 'thumbnail_location') else None, import_path=fp.import_path if hasattr(fp, 'import_path') else None) except NoFile: - raise NotFoundError() + if throw_on_not_found: + raise NotFoundError() + else: + return None def export(self, location, 
output_directory): content = self.find(location) diff --git a/common/lib/xmodule/xmodule/contentstore/utils.py b/common/lib/xmodule/xmodule/contentstore/utils.py new file mode 100644 index 0000000000..74c4242bd9 --- /dev/null +++ b/common/lib/xmodule/xmodule/contentstore/utils.py @@ -0,0 +1,49 @@ +from xmodule.modulestore import Location +from xmodule.contentstore.content import StaticContent +from .django import contentstore + + +def empty_asset_trashcan(course_locs): + ''' + This method will hard delete all assets (optionally within a course_id) from the trashcan + ''' + store = contentstore('trashcan') + + for course_loc in course_locs: + # first delete all of the thumbnails + thumbs = store.get_all_content_thumbnails_for_course(course_loc) + for thumb in thumbs: + thumb_loc = Location(thumb["_id"]) + id = StaticContent.get_id_from_location(thumb_loc) + print "Deleting {0}...".format(id) + store.delete(id) + + # then delete all of the assets + assets = store.get_all_content_for_course(course_loc) + for asset in assets: + asset_loc = Location(asset["_id"]) + id = StaticContent.get_id_from_location(asset_loc) + print "Deleting {0}...".format(id) + store.delete(id) + + +def restore_asset_from_trashcan(location): + ''' + This method will restore an asset which got soft deleted and put back in the original course + ''' + trash = contentstore('trashcan') + store = contentstore() + + loc = StaticContent.get_location_from_path(location) + content = trash.find(loc) + + # ok, save the content into the courseware + store.save(content) + + # see if there is a thumbnail as well, if so move that as well + if content.thumbnail_location is not None: + try: + thumbnail_content = trash.find(content.thumbnail_location) + store.save(thumbnail_content) + except: + pass # OK if this is left dangling diff --git a/common/lib/xmodule/xmodule/course_module.py b/common/lib/xmodule/xmodule/course_module.py index 063e53aef4..945c3a3cfa 100644 --- 
a/common/lib/xmodule/xmodule/course_module.py +++ b/common/lib/xmodule/xmodule/course_module.py @@ -4,21 +4,20 @@ from math import exp from lxml import etree from path import path # NOTE (THK): Only used for detecting presence of syllabus import requests -import time from datetime import datetime import dateutil.parser from xmodule.modulestore import Location from xmodule.seq_module import SequenceDescriptor, SequenceModule -from xmodule.timeparse import parse_time from xmodule.util.decorators import lazyproperty from xmodule.graders import grader_from_conf -from xmodule.util.date_utils import time_to_datetime import json -from xblock.core import Scope, List, String, Object, Boolean +from xblock.core import Scope, List, String, Dict, Boolean from .fields import Date +from django.utils.timezone import UTC +from xmodule.util import date_utils log = logging.getLogger(__name__) @@ -93,7 +92,7 @@ class Textbook(object): # see if we already fetched this if toc_url in _cached_toc: (table_of_contents, timestamp) = _cached_toc[toc_url] - age = datetime.now() - timestamp + age = datetime.now(UTC) - timestamp # expire every 10 minutes if age.seconds < 600: return table_of_contents @@ -154,25 +153,25 @@ class CourseFields(object): start = Date(help="Start time when this module is visible", scope=Scope.settings) end = Date(help="Date that this class ends", scope=Scope.settings) advertised_start = String(help="Date that this course is advertised to start", scope=Scope.settings) - grading_policy = Object(help="Grading policy definition for this class", scope=Scope.content) + grading_policy = Dict(help="Grading policy definition for this class", scope=Scope.content) show_calculator = Boolean(help="Whether to show the calculator in this course", default=False, scope=Scope.settings) display_name = String(help="Display name for this module", scope=Scope.settings) tabs = List(help="List of tabs to enable in this course", scope=Scope.settings) end_of_course_survey_url = 
String(help="Url for the end-of-course survey", scope=Scope.settings) discussion_blackouts = List(help="List of pairs of start/end dates for discussion blackouts", scope=Scope.settings) - discussion_topics = Object( + discussion_topics = Dict( help="Map of topics names to ids", scope=Scope.settings ) - testcenter_info = Object(help="Dictionary of Test Center info", scope=Scope.settings) + testcenter_info = Dict(help="Dictionary of Test Center info", scope=Scope.settings) announcement = Date(help="Date this course is announced", scope=Scope.settings) - cohort_config = Object(help="Dictionary defining cohort configuration", scope=Scope.settings) + cohort_config = Dict(help="Dictionary defining cohort configuration", scope=Scope.settings) is_new = Boolean(help="Whether this course should be flagged as new", scope=Scope.settings) no_grade = Boolean(help="True if this course isn't graded", default=False, scope=Scope.settings) disable_progress_graph = Boolean(help="True if this course shouldn't display the progress graph", default=False, scope=Scope.settings) pdf_textbooks = List(help="List of dictionaries containing pdf_textbook configuration", scope=Scope.settings) html_textbooks = List(help="List of dictionaries containing html_textbook configuration", scope=Scope.settings) - remote_gradebook = Object(scope=Scope.settings) + remote_gradebook = Dict(scope=Scope.settings) allow_anonymous = Boolean(scope=Scope.settings, default=True) allow_anonymous_to_peers = Boolean(scope=Scope.settings, default=False) advanced_modules = List(help="Beta modules used in your course", scope=Scope.settings) @@ -219,8 +218,7 @@ class CourseDescriptor(CourseFields, SequenceDescriptor): msg = None if self.start is None: msg = "Course loaded without a valid start date. 
id = %s" % self.id - # hack it -- start in 1970 - self.start = time.gmtime(0) + self.start = datetime.now(UTC()) log.critical(msg) self.system.error_tracker(msg) @@ -392,7 +390,7 @@ class CourseDescriptor(CourseFields, SequenceDescriptor): textbook_xml_object.set('book_url', textbook.book_url) xml_object.append(textbook_xml_object) - + return xml_object def has_ended(self): @@ -403,10 +401,10 @@ class CourseDescriptor(CourseFields, SequenceDescriptor): if self.end is None: return False - return time.gmtime() > self.end + return datetime.now(UTC()) > self.end def has_started(self): - return time.gmtime() > self.start + return datetime.now(UTC()) > self.start @property def grader(self): @@ -547,14 +545,16 @@ class CourseDescriptor(CourseFields, SequenceDescriptor): announcement = self.announcement if announcement is not None: - announcement = time_to_datetime(announcement) + announcement = announcement try: start = dateutil.parser.parse(self.advertised_start) + if start.tzinfo is None: + start = start.replace(tzinfo=UTC()) except (ValueError, AttributeError): - start = time_to_datetime(self.start) + start = self.start - now = datetime.utcnow() + now = datetime.now(UTC()) return announcement, start, now @@ -644,8 +644,11 @@ class CourseDescriptor(CourseFields, SequenceDescriptor): def start_date_text(self): def try_parse_iso_8601(text): try: - result = datetime.strptime(text, "%Y-%m-%dT%H:%M") - result = result.strftime("%b %d, %Y") + result = Date().from_json(text) + if result is None: + result = text.title() + else: + result = result.strftime("%b %d, %Y") except ValueError: result = text.title() @@ -656,7 +659,7 @@ class CourseDescriptor(CourseFields, SequenceDescriptor): elif self.advertised_start is None and self.start is None: return 'TBD' else: - return time.strftime("%b %d, %Y", self.advertised_start or self.start) + return (self.advertised_start or self.start).strftime("%b %d, %Y") @property def end_date_text(self): @@ -665,15 +668,17 @@ class 
CourseDescriptor(CourseFields, SequenceDescriptor): If the course does not have an end date set (course.end is None), an empty string will be returned. """ - return '' if self.end is None else time.strftime("%b %d, %Y", self.end) + return '' if self.end is None else self.end.strftime("%b %d, %Y") @property def forum_posts_allowed(self): + date_proxy = Date() try: - blackout_periods = [(parse_time(start), parse_time(end)) + blackout_periods = [(date_proxy.from_json(start), + date_proxy.from_json(end)) for start, end in self.discussion_blackouts] - now = time.gmtime() + now = datetime.now(UTC()) for start, end in blackout_periods: if start <= now <= end: return False @@ -699,7 +704,8 @@ class CourseDescriptor(CourseFields, SequenceDescriptor): self.last_eligible_appointment_date = self._try_parse_time('Last_Eligible_Appointment_Date') # or self.first_eligible_appointment_date if self.last_eligible_appointment_date is None: raise ValueError("Last appointment date must be specified") - self.registration_start_date = self._try_parse_time('Registration_Start_Date') or time.gmtime(0) + self.registration_start_date = (self._try_parse_time('Registration_Start_Date') or + datetime.fromtimestamp(0, UTC())) self.registration_end_date = self._try_parse_time('Registration_End_Date') or self.last_eligible_appointment_date # do validation within the exam info: if self.registration_start_date > self.registration_end_date: @@ -718,39 +724,39 @@ class CourseDescriptor(CourseFields, SequenceDescriptor): """ if key in self.exam_info: try: - return parse_time(self.exam_info[key]) + return Date().from_json(self.exam_info[key]) except ValueError as e: msg = "Exam {0} in course {1} loaded with a bad exam_info key '{2}': '{3}'".format(self.exam_name, self.course_id, self.exam_info[key], e) log.warning(msg) return None def has_started(self): - return time.gmtime() > self.first_eligible_appointment_date + return datetime.now(UTC()) > self.first_eligible_appointment_date def has_ended(self): - 
return time.gmtime() > self.last_eligible_appointment_date + return datetime.now(UTC()) > self.last_eligible_appointment_date def has_started_registration(self): - return time.gmtime() > self.registration_start_date + return datetime.now(UTC()) > self.registration_start_date def has_ended_registration(self): - return time.gmtime() > self.registration_end_date + return datetime.now(UTC()) > self.registration_end_date def is_registering(self): - now = time.gmtime() + now = datetime.now(UTC()) return now >= self.registration_start_date and now <= self.registration_end_date @property def first_eligible_appointment_date_text(self): - return time.strftime("%b %d, %Y", self.first_eligible_appointment_date) + return self.first_eligible_appointment_date.strftime("%b %d, %Y") @property def last_eligible_appointment_date_text(self): - return time.strftime("%b %d, %Y", self.last_eligible_appointment_date) + return self.last_eligible_appointment_date.strftime("%b %d, %Y") @property def registration_end_date_text(self): - return time.strftime("%b %d, %Y at %H:%M UTC", self.registration_end_date) + return date_utils.get_default_time_display(self.registration_end_date) @property def current_test_center_exam(self): diff --git a/common/lib/xmodule/xmodule/error_module.py b/common/lib/xmodule/xmodule/error_module.py index e998ceb58e..a37081c447 100644 --- a/common/lib/xmodule/xmodule/error_module.py +++ b/common/lib/xmodule/xmodule/error_module.py @@ -94,11 +94,11 @@ class ErrorDescriptor(ErrorFields, JSONEditingDescriptor): model_data = { 'error_msg': str(error_msg), 'contents': contents, - 'display_name': 'Error: ' + location.name + 'display_name': 'Error: ' + location.name, + 'location': location, } return cls( system, - location, model_data, ) diff --git a/common/lib/xmodule/xmodule/fields.py b/common/lib/xmodule/xmodule/fields.py index 3d56b7941e..8a74856fc1 100644 --- a/common/lib/xmodule/xmodule/fields.py +++ b/common/lib/xmodule/xmodule/fields.py @@ -2,20 +2,41 @@ import time 
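Editor's note on the pattern running through course_module.py above: every naive `time.gmtime()` comparison is replaced with a timezone-aware `datetime.now(UTC())`. A minimal sketch of the resulting start/end checks, using the stdlib's `timezone.utc` where the platform uses pytz's `UTC()` (free functions for illustration, not the platform's methods):

```python
from datetime import datetime, timezone

def has_started(start):
    """True once the course start date has passed (start is a tz-aware datetime)."""
    return datetime.now(timezone.utc) > start

def has_ended(end):
    """True once the course end date has passed; a course with no end never ends."""
    if end is None:
        return False
    return datetime.now(timezone.utc) > end

# Aware datetimes compare correctly with each other; comparing an aware
# datetime to a naive one raises TypeError, which is the class of bug
# this migration removes.
print(has_started(datetime(2001, 9, 5, tzinfo=timezone.utc)))  # True
print(has_ended(None))                                         # False
```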
import logging import re -from datetime import timedelta from xblock.core import ModelType import datetime import dateutil.parser -from xblock.core import Integer, Float, Boolean +from pytz import UTC log = logging.getLogger(__name__) class Date(ModelType): ''' - Date fields know how to parse and produce json (iso) compatible formats. + Date fields know how to parse and produce json (iso) compatible formats. Converts to tz aware datetimes. ''' + # See note below about not defaulting these + CURRENT_YEAR = datetime.datetime.now(UTC).year + PREVENT_DEFAULT_DAY_MON_SEED1 = datetime.datetime(CURRENT_YEAR, 1, 1, tzinfo=UTC) + PREVENT_DEFAULT_DAY_MON_SEED2 = datetime.datetime(CURRENT_YEAR, 2, 2, tzinfo=UTC) + + def _parse_date_wo_default_month_day(self, field): + """ + Parse the field as an iso string but prevent dateutil from defaulting the day or month while + allowing it to default the other fields. + """ + # It's not trivial to replace dateutil b/c parsing timezones as Z, +03:30, -400 is hard in python + # however, we don't want dateutil to default the month or day (but some tests at least expect + # us to default year); so, we'll see if dateutil uses the defaults for these the hard way + result = dateutil.parser.parse(field, default=self.PREVENT_DEFAULT_DAY_MON_SEED1) + result_other = dateutil.parser.parse(field, default=self.PREVENT_DEFAULT_DAY_MON_SEED2) + if result != result_other: + log.warning("Field {0} with value {1} is missing month or day".format(self._name, field)) + return None + if result.tzinfo is None: + result = result.replace(tzinfo=UTC) + return result + def from_json(self, field): """ Parse an optional metadata key containing a time: if present, complain @@ -27,11 +48,12 @@ class Date(ModelType): elif field is "": return None elif isinstance(field, basestring): - d = dateutil.parser.parse(field) - return d.utctimetuple() + return self._parse_date_wo_default_month_day(field) elif isinstance(field, (int, long, float)): + return
datetime.datetime.fromtimestamp(field / 1000, UTC) elif isinstance(field, time.struct_time): + return datetime.datetime.fromtimestamp(time.mktime(field), UTC) + elif isinstance(field, datetime.datetime): return field else: msg = "Field {0} has bad value '{1}'".format( @@ -49,7 +71,11 @@ class Date(ModelType): # struct_times are always utc return time.strftime('%Y-%m-%dT%H:%M:%SZ', value) elif isinstance(value, datetime.datetime): - return value.isoformat() + 'Z' + if value.tzinfo is None or value.utcoffset().total_seconds() == 0: + # isoformat adds +00:00 rather than Z + return value.strftime('%Y-%m-%dT%H:%M:%SZ') + else: + return value.isoformat() TIMEDELTA_REGEX = re.compile(r'^((?P<days>\d+?) day(?:s?))?(\s)?((?P<hours>\d+?) hour(?:s?))?(\s)?((?P<minutes>\d+?) minute(?:s)?)?(\s)?((?P<seconds>\d+?) second(?:s)?)?$') @@ -66,6 +92,8 @@ class Timedelta(ModelType): Returns a datetime.timedelta parsed from the string """ + if time_str is None: + return None parts = TIMEDELTA_REGEX.match(time_str) if not parts: return @@ -74,7 +102,7 @@ for (name, param) in parts.groupdict().iteritems(): if param: time_params[name] = int(param) - return timedelta(**time_params) + return datetime.timedelta(**time_params) def to_json(self, value): values = [] @@ -83,42 +111,3 @@ if cur_value > 0: values.append("%d %s" % (cur_value, attr)) return ' '.join(values) - - -class StringyInteger(Integer): - """ - A model type that converts from strings to integers when reading from json. - If value does not parse as an int, returns None. - """ - def from_json(self, value): - try: - return int(value) - except: - return None - - -class StringyFloat(Float): - """ - A model type that converts from string to floats when reading from json. - If value does not parse as a float, returns None. - """ - def from_json(self, value): - try: - return float(value) - except: - return None - - -class StringyBoolean(Boolean): - """ - Reads strings from JSON as booleans.
- - If the string is 'true' (case insensitive), then return True, - otherwise False. - - JSON values that aren't strings are returned as-is. - """ - def from_json(self, value): - if isinstance(value, basestring): - return value.lower() == 'true' - return value diff --git a/common/lib/xmodule/xmodule/foldit_module.py b/common/lib/xmodule/xmodule/foldit_module.py index 62c5ea416e..fdab14b58d 100644 --- a/common/lib/xmodule/xmodule/foldit_module.py +++ b/common/lib/xmodule/xmodule/foldit_module.py @@ -8,7 +8,6 @@ from xmodule.x_module import XModule from xmodule.xml_module import XmlDescriptor from xblock.core import Scope, Integer, String from .fields import Date -from xmodule.util.date_utils import time_to_datetime log = logging.getLogger(__name__) @@ -31,9 +30,7 @@ class FolditModule(FolditFields, XModule): css = {'scss': [resource_string(__name__, 'css/foldit/leaderboard.scss')]} def __init__(self, *args, **kwargs): - XModule.__init__(self, *args, **kwargs) """ - Example: """ - - self.due_time = time_to_datetime(self.due) + XModule.__init__(self, *args, **kwargs) + self.due_time = self.due def is_complete(self): """ @@ -102,7 +99,7 @@ class FolditModule(FolditFields, XModule): from foldit.models import Score leaders = [(e['username'], e['score']) for e in Score.get_tops_n(10)] - leaders.sort(key=lambda x: -x[1]) + leaders.sort(key=lambda x:-x[1]) return leaders @@ -186,7 +183,6 @@ class FolditDescriptor(FolditFields, XmlDescriptor, EditingDescriptor): module_class = FolditModule filename_extension = "xml" - stores_state = True has_score = True template_dir_name = "foldit" diff --git a/common/lib/xmodule/xmodule/js/fixtures/videoalpha.html b/common/lib/xmodule/xmodule/js/fixtures/videoalpha.html new file mode 100644 index 0000000000..bccf5df2cc --- /dev/null +++ b/common/lib/xmodule/xmodule/js/fixtures/videoalpha.html @@ -0,0 +1,23 @@ +
+ [body of the new videoalpha.html fixture not recoverable: its HTML markup was stripped during extraction]
\ No newline at end of file diff --git a/common/lib/xmodule/xmodule/js/fixtures/videoalpha_html5.html b/common/lib/xmodule/xmodule/js/fixtures/videoalpha_html5.html new file mode 100644 index 0000000000..6088d07f2b --- /dev/null +++ b/common/lib/xmodule/xmodule/js/fixtures/videoalpha_html5.html @@ -0,0 +1,27 @@ +
+ [body of the new videoalpha_html5.html fixture not recoverable: its HTML markup was stripped during extraction]
\ No newline at end of file diff --git a/common/lib/xmodule/xmodule/js/spec/helper.coffee b/common/lib/xmodule/xmodule/js/spec/helper.coffee index 5cf75366d8..5f7fc27be0 100644 --- a/common/lib/xmodule/xmodule/js/spec/helper.coffee +++ b/common/lib/xmodule/xmodule/js/spec/helper.coffee @@ -20,10 +20,25 @@ jasmine.stubbedMetadata = bogus: duration: 100 +jasmine.fireEvent = (el, eventName) -> + if document.createEvent + event = document.createEvent "HTMLEvents" + event.initEvent eventName, true, true + else + event = document.createEventObject() + event.eventType = eventName + event.eventName = eventName + if document.createEvent + el.dispatchEvent(event) + else + el.fireEvent("on" + event.eventType, event) + jasmine.stubbedCaption = start: [0, 10000, 20000, 30000] text: ['Caption at 0', 'Caption at 10000', 'Caption at 20000', 'Caption at 30000'] +jasmine.stubbedHtml5Speeds = ['0.75', '1.0', '1.25', '1.50'] + jasmine.stubRequests = -> spyOn($, 'ajax').andCallFake (settings) -> if match = settings.url.match /youtube\.com\/.+\/videos\/(.+)\?v=2&alt=jsonc/ @@ -41,9 +56,12 @@ jasmine.stubRequests = -> throw "External request attempted for #{settings.url}, which is not defined." 
jasmine.stubYoutubePlayer = -> - YT.Player = -> jasmine.createSpyObj 'YT.Player', ['cueVideoById', 'getVideoEmbedCode', + YT.Player = -> + obj = jasmine.createSpyObj 'YT.Player', ['cueVideoById', 'getVideoEmbedCode', 'getCurrentTime', 'getPlayerState', 'getVolume', 'setVolume', 'loadVideoById', - 'playVideo', 'pauseVideo', 'seekTo'] + 'playVideo', 'pauseVideo', 'seekTo', 'getDuration', 'getAvailablePlaybackRates', 'setPlaybackRate'] + obj['getAvailablePlaybackRates'] = jasmine.createSpy('getAvailablePlaybackRates').andReturn [0.75, 1.0, 1.25, 1.5] + obj jasmine.stubVideoPlayer = (context, enableParts, createPlayer=true) -> enableParts = [enableParts] unless $.isArray(enableParts) @@ -60,6 +78,21 @@ jasmine.stubVideoPlayer = (context, enableParts, createPlayer=true) -> if createPlayer return new VideoPlayer(video: context.video) +jasmine.stubVideoPlayerAlpha = (context, enableParts, createPlayer=true, html5=false) -> + suite = context.suite + currentPartName = suite.description while suite = suite.parentSuite + if html5 == false + loadFixtures 'videoalpha.html' + else + loadFixtures 'videoalpha_html5.html' + jasmine.stubRequests() + YT.Player = undefined + window.OldVideoPlayerAlpha = undefined + context.video = new VideoAlpha '#example', '.75:slowerSpeedYoutubeId,1.0:normalSpeedYoutubeId' + jasmine.stubYoutubePlayer() + if createPlayer + return new VideoPlayerAlpha(video: context.video) + # Stub jQuery.cookie $.cookie = jasmine.createSpy('jQuery.cookie').andReturn '1.0' diff --git a/common/lib/xmodule/xmodule/js/spec/videoalpha/display/html5_video.coffee b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/html5_video.coffee new file mode 100644 index 0000000000..176ceb7827 --- /dev/null +++ b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/html5_video.coffee @@ -0,0 +1,311 @@ +describe 'VideoAlpha HTML5Video', -> + playbackRates = [0.75, 1.0, 1.25, 1.5] + STATUS = window.YT.PlayerState + playerVars = + controls: 0 + wmode: 'transparent' + rel: 0 + 
showinfo: 0 + enablejsapi: 1 + modestbranding: 1 + html5: 1 + file = window.location.href.replace(/\/common(.*)$/, '') + '/test_root/data/videoalpha/gizmo' + html5Sources = + mp4: "#{file}.mp4" + webm: "#{file}.webm" + ogg: "#{file}.ogv" + onReady = jasmine.createSpy 'onReady' + onStateChange = jasmine.createSpy 'onStateChange' + + beforeEach -> + loadFixtures 'videoalpha_html5.html' + @el = $('#example').find('.video') + @player = new window.HTML5Video.Player @el, + playerVars: playerVars, + videoSources: html5Sources, + events: + onReady: onReady + onStateChange: onStateChange + + @videoEl = @el.find('.video-player video').get(0) + + it 'PlayerState', -> + expect(HTML5Video.PlayerState).toEqual STATUS + + describe 'constructor', -> + it 'create an html5 video element', -> + expect(@el.find('.video-player div')).toContain 'video' + + it 'check if sources are created in correct way', -> + sources = $(@videoEl).find('source') + videoTypes = [] + videoSources = [] + $.each html5Sources, (index, source) -> + videoTypes.push index + videoSources.push source + $.each sources, (index, source) -> + s = $(source) + expect($.inArray(s.attr('src'), videoSources)).not.toEqual -1 + expect($.inArray(s.attr('type').replace('video/', ''), videoTypes)) + .not.toEqual -1 + + it 'check if click event is handled on the player', -> + expect(@videoEl).toHandle 'click' + + # NOTE: According to + # + # https://github.com/ariya/phantomjs/wiki/Supported-Web-Standards#unsupported-features + # + # Video and Audio (due to the nature of PhantomJS) are not supported. After discussion + # with William Daly, some tests are disabled (Jenkins uses phantomjs for running tests + # and those tests fail). + # + # During code review, please enable the test below (change "xdescribe" to "describe" + # to enable the test). 
+  xdescribe 'events:', ->
+
+    beforeEach ->
+      spyOn(@player, 'callStateChangeCallback').andCallThrough()
+
+    describe 'click', ->
+      describe 'when player is paused', ->
+        beforeEach ->
+          spyOn(@videoEl, 'play').andCallThrough()
+          @player.playerState = STATUS.PAUSED
+          $(@videoEl).trigger('click')
+
+        it 'native play event was called', ->
+          expect(@videoEl.play).toHaveBeenCalled()
+
+        it 'player state was changed', ->
+          expect(@player.playerState).toBe STATUS.PLAYING
+
+        it 'callback was called', ->
+          expect(@player.callStateChangeCallback).toHaveBeenCalled()
+
+      describe 'when player is played', ->
+
+        beforeEach ->
+          spyOn(@videoEl, 'pause').andCallThrough()
+          @player.playerState = STATUS.PLAYING
+          $(@videoEl).trigger('click')
+
+        it 'native pause event was called', ->
+          expect(@videoEl.pause).toHaveBeenCalled()
+
+        it 'player state was changed', ->
+          expect(@player.playerState).toBe STATUS.PAUSED
+
+        it 'callback was called', ->
+          expect(@player.callStateChangeCallback).toHaveBeenCalled()
+
+    describe 'play', ->
+
+      beforeEach ->
+        spyOn(@videoEl, 'play').andCallThrough()
+        @player.playerState = STATUS.PAUSED
+        @videoEl.play()
+
+      it 'native event was called', ->
+        expect(@videoEl.play).toHaveBeenCalled()
+
+      it 'player state was changed', ->
+        waitsFor ( ->
+          @player.playerState != HTML5Video.PlayerState.PAUSED
+        ), 'Player state should be changed', 1000
+
+        runs ->
+          expect(@player.playerState).toBe STATUS.PLAYING
+
+      it 'callback was called', ->
+        waitsFor ( ->
+          @player.playerState != STATUS.PAUSED
+        ), 'Player state should be changed', 1000
+
+        runs ->
+          expect(@player.callStateChangeCallback).toHaveBeenCalled()
+
+    describe 'pause', ->
+
+      beforeEach ->
+        spyOn(@videoEl, 'pause').andCallThrough()
+        @videoEl.play()
+        @videoEl.pause()
+
+      it 'native event was called', ->
+        expect(@videoEl.pause).toHaveBeenCalled()
+
+      it 'player state was changed', ->
+        waitsFor ( ->
+          @player.playerState != STATUS.UNSTARTED
+        ), 'Player state should be changed', 1000
+
+        runs ->
+          expect(@player.playerState).toBe STATUS.PAUSED
+
+      it 'callback was called', ->
+        waitsFor ( ->
+          @player.playerState != HTML5Video.PlayerState.UNSTARTED
+        ), 'Player state should be changed', 1000
+
+        runs ->
+          expect(@player.callStateChangeCallback).toHaveBeenCalled()
+
+    describe 'canplay', ->
+
+      beforeEach ->
+        waitsFor ( ->
+          @player.playerState != STATUS.UNSTARTED
+        ), 'Video cannot be played', 1000
+
+      it 'player state was changed', ->
+        runs ->
+          expect(@player.playerState).toBe STATUS.PAUSED
+
+      it 'end property was defined', ->
+        runs ->
+          expect(@player.end).not.toBeNull()
+
+      it 'start position was defined', ->
+        runs ->
+          expect(@videoEl.currentTime).toBe(@player.start)
+
+      it 'callback was called', ->
+        runs ->
+          expect(@player.config.events.onReady).toHaveBeenCalled()
+
+    describe 'ended', ->
+      beforeEach ->
+        waitsFor ( ->
+          @player.playerState != STATUS.UNSTARTED
+        ), 'Video cannot be played', 1000
+
+      it 'player state was changed', ->
+        runs ->
+          jasmine.fireEvent @videoEl, "ended"
+          expect(@player.playerState).toBe STATUS.ENDED
+
+      it 'callback was called', ->
+        jasmine.fireEvent @videoEl, "ended"
+        expect(@player.callStateChangeCallback).toHaveBeenCalled()
+
+    describe 'timeupdate', ->
+
+      beforeEach ->
+        spyOn(@videoEl, 'pause').andCallThrough()
+        waitsFor ( ->
+          @player.playerState != STATUS.UNSTARTED
+        ), 'Video cannot be played', 1000
+
+      it 'player should be paused', ->
+        runs ->
+          @player.end = 3
+          @videoEl.currentTime = 5
+          jasmine.fireEvent @videoEl, "timeupdate"
+          expect(@videoEl.pause).toHaveBeenCalled()
+
+      it 'end param should be re-defined', ->
+        runs ->
+          @player.end = 3
+          @videoEl.currentTime = 5
+          jasmine.fireEvent @videoEl, "timeupdate"
+          expect(@player.end).toBe @videoEl.duration
+
+  # NOTE: According to
+  #
+  # https://github.com/ariya/phantomjs/wiki/Supported-Web-Standards#unsupported-features
+  #
+  # Video and Audio (due to the nature of PhantomJS) are not supported. After discussion
+  # with William Daly, some tests are disabled (Jenkins uses phantomjs for running tests
+  # and those tests fail).
+  #
+  # During code review, please enable the test below (change "xdescribe" to "describe"
+  # to enable the test).
+  xdescribe 'methods:', ->
+
+    beforeEach ->
+      waitsFor ( ->
+        @volume = @videoEl.volume
+        @seek = @videoEl.currentTime
+        @player.playerState == STATUS.PAUSED
+      ), 'Video cannot be played', 1000
+
+    it 'pauseVideo', ->
+      spyOn(@videoEl, 'pause').andCallThrough()
+      @player.pauseVideo()
+      expect(@videoEl.pause).toHaveBeenCalled()
+
+    describe 'seekTo', ->
+
+      it 'set new correct value', ->
+        runs ->
+          @player.seekTo(2)
+          expect(@videoEl.currentTime).toBe 2
+
+      it 'set new incorrect values', ->
+        runs ->
+          @player.seekTo(-50)
+          expect(@videoEl.currentTime).toBe @seek
+          @player.seekTo('5')
+          expect(@videoEl.currentTime).toBe @seek
+          @player.seekTo(500000)
+          expect(@videoEl.currentTime).toBe @seek
+
+    describe 'setVolume', ->
+
+      it 'set new correct value', ->
+        runs ->
+          @player.setVolume(50)
+          expect(@videoEl.volume).toBe 50*0.01
+
+      it 'set new incorrect values', ->
+        runs ->
+          @player.setVolume(-50)
+          expect(@videoEl.volume).toBe @volume
+          @player.setVolume('5')
+          expect(@videoEl.volume).toBe @volume
+          @player.setVolume(500000)
+          expect(@videoEl.volume).toBe @volume
+
+    it 'getCurrentTime', ->
+      runs ->
+        @videoEl.currentTime = 3
+        expect(@player.getCurrentTime()).toBe @videoEl.currentTime
+
+    it 'playVideo', ->
+      runs ->
+        spyOn(@videoEl, 'play').andCallThrough()
+        @player.playVideo()
+        expect(@videoEl.play).toHaveBeenCalled()
+
+    it 'getPlayerState', ->
+      runs ->
+        @player.playerState = STATUS.PLAYING
+        expect(@player.getPlayerState()).toBe STATUS.PLAYING
+        @player.playerState = STATUS.ENDED
+        expect(@player.getPlayerState()).toBe STATUS.ENDED
+
+    it 'getVolume', ->
+      runs ->
+        @volume = @videoEl.volume = 0.5
+        expect(@player.getVolume()).toBe @volume
+
+    it 'getDuration', ->
+      runs ->
+        @duration =
@videoEl.duration
+        expect(@player.getDuration()).toBe @duration
+
+    describe 'setPlaybackRate', ->
+      it 'set a correct value', ->
+        @playbackRate = 1.5
+        @player.setPlaybackRate @playbackRate
+        expect(@videoEl.playbackRate).toBe @playbackRate
+
+      it 'set NaN value', ->
+        @playbackRate = NaN
+        @player.setPlaybackRate @playbackRate
+        expect(@videoEl.playbackRate).toBe 1.0
+
+    it 'getAvailablePlaybackRates', ->
+      expect(@player.getAvailablePlaybackRates()).toEqual playbackRates
diff --git a/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_caption_spec.coffee b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_caption_spec.coffee
new file mode 100644
index 0000000000..4bd237b81d
--- /dev/null
+++ b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_caption_spec.coffee
@@ -0,0 +1,373 @@
+describe 'VideoCaptionAlpha', ->
+
+  beforeEach ->
+    spyOn(VideoCaptionAlpha.prototype, 'fetchCaption').andCallThrough()
+    spyOn($, 'ajaxWithPrefix').andCallThrough()
+    window.onTouchBasedDevice = jasmine.createSpy('onTouchBasedDevice').andReturn false
+
+  afterEach ->
+    YT.Player = undefined
+    $.fn.scrollTo.reset()
+    $('.subtitles').remove()
+
+  describe 'constructor', ->
+
+    describe 'always', ->
+
+      beforeEach ->
+        @player = jasmine.stubVideoPlayerAlpha @
+        @caption = @player.caption
+
+      it 'set the youtube id', ->
+        expect(@caption.youtubeId).toEqual 'normalSpeedYoutubeId'
+
+      it 'create the caption element', ->
+        expect($('.video')).toContain 'ol.subtitles'
+
+      it 'add caption control to video player', ->
+        expect($('.video')).toContain 'a.hide-subtitles'
+
+      it 'fetch the caption', ->
+        expect(@caption.loaded).toBeTruthy()
+        expect(@caption.fetchCaption).toHaveBeenCalled()
+        expect($.ajaxWithPrefix).toHaveBeenCalledWith
+          url: @caption.captionURL()
+          notifyOnError: false
+          success: jasmine.any(Function)
+
+      it 'bind window resize event', ->
+        expect($(window)).toHandleWith 'resize', @caption.resize
+
+      it 'bind the hide caption button', ->
expect($('.hide-subtitles')).toHandleWith 'click', @caption.toggle
+
+      it 'bind the mouse movement', ->
+        expect($('.subtitles')).toHandleWith 'mouseover', @caption.onMouseEnter
+        expect($('.subtitles')).toHandleWith 'mouseout', @caption.onMouseLeave
+        expect($('.subtitles')).toHandleWith 'mousemove', @caption.onMovement
+        expect($('.subtitles')).toHandleWith 'mousewheel', @caption.onMovement
+        expect($('.subtitles')).toHandleWith 'DOMMouseScroll', @caption.onMovement
+
+    describe 'when on a non touch-based device', ->
+
+      beforeEach ->
+        @player = jasmine.stubVideoPlayerAlpha @
+        @caption = @player.caption
+
+      it 'render the caption', ->
+        captionsData = jasmine.stubbedCaption
+        $('.subtitles li[data-index]').each (index, link) =>
+          expect($(link)).toHaveData 'index', index
+          expect($(link)).toHaveData 'start', captionsData.start[index]
+          expect($(link)).toHaveText captionsData.text[index]
+
+      it 'add a padding element to caption', ->
+        expect($('.subtitles li:first')).toBe '.spacing'
+        expect($('.subtitles li:last')).toBe '.spacing'
+
+      it 'bind all the caption link', ->
+        $('.subtitles li[data-index]').each (index, link) =>
+          expect($(link)).toHandleWith 'click', @caption.seekPlayer
+
+      it 'set rendered to true', ->
+        expect(@caption.rendered).toBeTruthy()
+
+    describe 'when on a touch-based device', ->
+
+      beforeEach ->
+        window.onTouchBasedDevice.andReturn true
+        @player = jasmine.stubVideoPlayerAlpha @
+        @caption = @player.caption
+
+      it 'show explanation message', ->
+        expect($('.subtitles li')).toHaveHtml "Caption will be displayed when you start playing the video."
+ + it 'does not set rendered to true', -> + expect(@caption.rendered).toBeFalsy() + + describe 'mouse movement', -> + + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @caption = @player.caption + window.setTimeout.andReturn(100) + spyOn window, 'clearTimeout' + + describe 'when cursor is outside of the caption box', -> + + beforeEach -> + $(window).trigger jQuery.Event 'mousemove' + + it 'does not set freezing timeout', -> + expect(@caption.frozen).toBeFalsy() + + describe 'when cursor is in the caption box', -> + + beforeEach -> + $('.subtitles').trigger jQuery.Event 'mouseenter' + + it 'set the freezing timeout', -> + expect(@caption.frozen).toEqual 100 + + describe 'when the cursor is moving', -> + beforeEach -> + $('.subtitles').trigger jQuery.Event 'mousemove' + + it 'reset the freezing timeout', -> + expect(window.clearTimeout).toHaveBeenCalledWith 100 + + describe 'when the mouse is scrolling', -> + beforeEach -> + $('.subtitles').trigger jQuery.Event 'mousewheel' + + it 'reset the freezing timeout', -> + expect(window.clearTimeout).toHaveBeenCalledWith 100 + + describe 'when cursor is moving out of the caption box', -> + beforeEach -> + @caption.frozen = 100 + $.fn.scrollTo.reset() + + describe 'always', -> + beforeEach -> + $('.subtitles').trigger jQuery.Event 'mouseout' + + it 'reset the freezing timeout', -> + expect(window.clearTimeout).toHaveBeenCalledWith 100 + + it 'unfreeze the caption', -> + expect(@caption.frozen).toBeNull() + + describe 'when the player is playing', -> + beforeEach -> + @caption.playing = true + $('.subtitles li[data-index]:first').addClass 'current' + $('.subtitles').trigger jQuery.Event 'mouseout' + + it 'scroll the caption', -> + expect($.fn.scrollTo).toHaveBeenCalled() + + describe 'when the player is not playing', -> + beforeEach -> + @caption.playing = false + $('.subtitles').trigger jQuery.Event 'mouseout' + + it 'does not scroll the caption', -> + expect($.fn.scrollTo).not.toHaveBeenCalled() + + describe 
'search', -> + + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @caption = @player.caption + + it 'return a correct caption index', -> + expect(@caption.search(0)).toEqual 0 + expect(@caption.search(9999)).toEqual 0 + expect(@caption.search(10000)).toEqual 1 + expect(@caption.search(15000)).toEqual 1 + expect(@caption.search(30000)).toEqual 3 + expect(@caption.search(30001)).toEqual 3 + + describe 'play', -> + describe 'when the caption was not rendered', -> + beforeEach -> + window.onTouchBasedDevice.andReturn true + @player = jasmine.stubVideoPlayerAlpha @ + @caption = @player.caption + @caption.play() + + it 'render the caption', -> + captionsData = jasmine.stubbedCaption + $('.subtitles li[data-index]').each (index, link) => + expect($(link)).toHaveData 'index', index + expect($(link)).toHaveData 'start', captionsData.start[index] + expect($(link)).toHaveText captionsData.text[index] + + it 'add a padding element to caption', -> + expect($('.subtitles li:first')).toBe '.spacing' + expect($('.subtitles li:last')).toBe '.spacing' + + it 'bind all the caption link', -> + $('.subtitles li[data-index]').each (index, link) => + expect($(link)).toHandleWith 'click', @caption.seekPlayer + + it 'set rendered to true', -> + expect(@caption.rendered).toBeTruthy() + + it 'set playing to true', -> + expect(@caption.playing).toBeTruthy() + + describe 'pause', -> + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @caption = @player.caption + @caption.playing = true + @caption.pause() + + it 'set playing to false', -> + expect(@caption.playing).toBeFalsy() + + describe 'updatePlayTime', -> + + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @caption = @player.caption + + describe 'when the video speed is 1.0x', -> + beforeEach -> + @caption.currentSpeed = '1.0' + @caption.updatePlayTime 25.000 + + it 'search the caption based on time', -> + expect(@caption.currentIndex).toEqual 2 + + describe 'when the video speed is not 1.0x', -> + beforeEach 
-> + @caption.currentSpeed = '0.75' + @caption.updatePlayTime 25.000 + + it 'search the caption based on 1.0x speed', -> + expect(@caption.currentIndex).toEqual 1 + + describe 'when the index is not the same', -> + beforeEach -> + @caption.currentIndex = 1 + $('.subtitles li[data-index=1]').addClass 'current' + @caption.updatePlayTime 25.000 + + it 'deactivate the previous caption', -> + expect($('.subtitles li[data-index=1]')).not.toHaveClass 'current' + + it 'activate new caption', -> + expect($('.subtitles li[data-index=2]')).toHaveClass 'current' + + it 'save new index', -> + expect(@caption.currentIndex).toEqual 2 + + it 'scroll caption to new position', -> + expect($.fn.scrollTo).toHaveBeenCalled() + + describe 'when the index is the same', -> + beforeEach -> + @caption.currentIndex = 1 + $('.subtitles li[data-index=1]').addClass 'current' + @caption.updatePlayTime 15.000 + + it 'does not change current subtitle', -> + expect($('.subtitles li[data-index=1]')).toHaveClass 'current' + + describe 'resize', -> + + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @caption = @player.caption + $('.subtitles li[data-index=1]').addClass 'current' + @caption.resize() + + it 'set the height of caption container', -> + expect(parseInt($('.subtitles').css('maxHeight'))).toBeCloseTo $('.video-wrapper').height(), 2 + + it 'set the height of caption spacing', -> + firstSpacing = Math.abs(parseInt($('.subtitles .spacing:first').css('height'))) + lastSpacing = Math.abs(parseInt($('.subtitles .spacing:last').css('height'))) + + expect(firstSpacing - @caption.topSpacingHeight()).toBeLessThan 1 + expect(lastSpacing - @caption.bottomSpacingHeight()).toBeLessThan 1 + + it 'scroll caption to new position', -> + expect($.fn.scrollTo).toHaveBeenCalled() + + describe 'scrollCaption', -> + + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @caption = @player.caption + + describe 'when frozen', -> + beforeEach -> + @caption.frozen = true + $('.subtitles 
li[data-index=1]').addClass 'current' + @caption.scrollCaption() + + it 'does not scroll the caption', -> + expect($.fn.scrollTo).not.toHaveBeenCalled() + + describe 'when not frozen', -> + beforeEach -> + @caption.frozen = false + + describe 'when there is no current caption', -> + beforeEach -> + @caption.scrollCaption() + + it 'does not scroll the caption', -> + expect($.fn.scrollTo).not.toHaveBeenCalled() + + describe 'when there is a current caption', -> + beforeEach -> + $('.subtitles li[data-index=1]').addClass 'current' + @caption.scrollCaption() + + it 'scroll to current caption', -> + offset = -0.5 * ($('.video-wrapper').height() - $('.subtitles .current:first').height()) + + expect($.fn.scrollTo).toHaveBeenCalledWith $('.subtitles .current:first', @caption.el), + offset: offset + + describe 'seekPlayer', -> + + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @caption = @player.caption + $(@caption).bind 'seek', (event, time) => @time = time + + describe 'when the video speed is 1.0x', -> + beforeEach -> + @caption.currentSpeed = '1.0' + $('.subtitles li[data-start="30000"]').trigger('click') + + it 'trigger seek event with the correct time', -> + expect(@player.currentTime).toEqual 30.000 + + describe 'when the video speed is not 1.0x', -> + beforeEach -> + @caption.currentSpeed = '0.75' + $('.subtitles li[data-start="30000"]').trigger('click') + + it 'trigger seek event with the correct time', -> + expect(@player.currentTime).toEqual 40.000 + + describe 'toggle', -> + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + spyOn @video, 'log' + @caption = @player.caption + $('.subtitles li[data-index=1]').addClass 'current' + + describe 'when the caption is visible', -> + beforeEach -> + @caption.el.removeClass 'closed' + @caption.toggle jQuery.Event('click') + + it 'log the hide_transcript event', -> + expect(@video.log).toHaveBeenCalledWith 'hide_transcript', + currentTime: @player.currentTime + + it 'hide the caption', -> + 
expect(@caption.el).toHaveClass 'closed' + + describe 'when the caption is hidden', -> + beforeEach -> + @caption.el.addClass 'closed' + @caption.toggle jQuery.Event('click') + + it 'log the show_transcript event', -> + expect(@video.log).toHaveBeenCalledWith 'show_transcript', + currentTime: @player.currentTime + + it 'show the caption', -> + expect(@caption.el).not.toHaveClass 'closed' + + it 'scroll the caption', -> + expect($.fn.scrollTo).toHaveBeenCalled() diff --git a/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_control_spec.coffee b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_control_spec.coffee new file mode 100644 index 0000000000..a4dc8739d8 --- /dev/null +++ b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_control_spec.coffee @@ -0,0 +1,103 @@ +describe 'VideoControlAlpha', -> + beforeEach -> + window.onTouchBasedDevice = jasmine.createSpy('onTouchBasedDevice').andReturn false + loadFixtures 'videoalpha.html' + $('.video-controls').html '' + + describe 'constructor', -> + + it 'render the video controls', -> + @control = new window.VideoControlAlpha(el: $('.video-controls')) + expect($('.video-controls')).toContain + ['.slider', 'ul.vcr', 'a.play', '.vidtime', '.add-fullscreen'].join(',') + expect($('.video-controls').find('.vidtime')).toHaveText '0:00 / 0:00' + + it 'bind the playback button', -> + @control = new window.VideoControlAlpha(el: $('.video-controls')) + expect($('.video_control')).toHandleWith 'click', @control.togglePlayback + + describe 'when on a touch based device', -> + beforeEach -> + window.onTouchBasedDevice.andReturn true + @control = new window.VideoControlAlpha(el: $('.video-controls')) + + it 'does not add the play class to video control', -> + expect($('.video_control')).not.toHaveClass 'play' + expect($('.video_control')).not.toHaveHtml 'Play' + + + describe 'when on a non-touch based device', -> + + beforeEach -> + @control = new window.VideoControlAlpha(el: $('.video-controls')) + + 
it 'add the play class to video control', -> + expect($('.video_control')).toHaveClass 'play' + expect($('.video_control')).toHaveHtml 'Play' + + describe 'play', -> + + beforeEach -> + @control = new window.VideoControlAlpha(el: $('.video-controls')) + @control.play() + + it 'switch playback button to play state', -> + expect($('.video_control')).not.toHaveClass 'play' + expect($('.video_control')).toHaveClass 'pause' + expect($('.video_control')).toHaveHtml 'Pause' + + describe 'pause', -> + + beforeEach -> + @control = new window.VideoControlAlpha(el: $('.video-controls')) + @control.pause() + + it 'switch playback button to pause state', -> + expect($('.video_control')).not.toHaveClass 'pause' + expect($('.video_control')).toHaveClass 'play' + expect($('.video_control')).toHaveHtml 'Play' + + describe 'togglePlayback', -> + + beforeEach -> + @control = new window.VideoControlAlpha(el: $('.video-controls')) + + describe 'when the control does not have play or pause class', -> + beforeEach -> + $('.video_control').removeClass('play').removeClass('pause') + + describe 'when the video is playing', -> + beforeEach -> + $('.video_control').addClass('play') + spyOnEvent @control, 'pause' + @control.togglePlayback jQuery.Event('click') + + it 'does not trigger the pause event', -> + expect('pause').not.toHaveBeenTriggeredOn @control + + describe 'when the video is paused', -> + beforeEach -> + $('.video_control').addClass('pause') + spyOnEvent @control, 'play' + @control.togglePlayback jQuery.Event('click') + + it 'does not trigger the play event', -> + expect('play').not.toHaveBeenTriggeredOn @control + + describe 'when the video is playing', -> + beforeEach -> + spyOnEvent @control, 'pause' + $('.video_control').addClass 'pause' + @control.togglePlayback jQuery.Event('click') + + it 'trigger the pause event', -> + expect('pause').toHaveBeenTriggeredOn @control + + describe 'when the video is paused', -> + beforeEach -> + spyOnEvent @control, 'play' + 
$('.video_control').addClass 'play' + @control.togglePlayback jQuery.Event('click') + + it 'trigger the play event', -> + expect('play').toHaveBeenTriggeredOn @control diff --git a/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_player_spec.coffee b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_player_spec.coffee new file mode 100644 index 0000000000..e9a5ca30b4 --- /dev/null +++ b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_player_spec.coffee @@ -0,0 +1,561 @@ +describe 'VideoPlayerAlpha', -> + playerVars = + controls: 0 + wmode: 'transparent' + rel: 0 + showinfo: 0 + enablejsapi: 1 + modestbranding: 1 + html5: 1 + + beforeEach -> + window.onTouchBasedDevice = jasmine.createSpy('onTouchBasedDevice').andReturn false + # It tries to call methods of VideoProgressSlider on Spy + for part in ['VideoCaptionAlpha', 'VideoSpeedControlAlpha', 'VideoVolumeControlAlpha', 'VideoProgressSliderAlpha', 'VideoControlAlpha'] + spyOn(window[part].prototype, 'initialize').andCallThrough() + + + afterEach -> + YT.Player = undefined + + describe 'constructor', -> + beforeEach -> + $.fn.qtip.andCallFake -> + $(this).data('qtip', true) + + describe 'always', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
')
+        @player = new VideoPlayerAlpha video: @video
+
+      it 'instantiate current time to zero', ->
+        expect(@player.currentTime).toEqual 0
+
+      it 'set the element', ->
+        expect(@player.el).toHaveId 'video_id'
+
+      it 'create video control', ->
+        expect(window.VideoControlAlpha.prototype.initialize).toHaveBeenCalled()
+        expect(@player.control).toBeDefined()
+        expect(@player.control.el).toBe $('.video-controls', @player.el)
+
+      it 'create video caption', ->
+        expect(window.VideoCaptionAlpha.prototype.initialize).toHaveBeenCalled()
+        expect(@player.caption).toBeDefined()
+        expect(@player.caption.el).toBe @player.el
+        expect(@player.caption.youtubeId).toEqual 'normalSpeedYoutubeId'
+        expect(@player.caption.currentSpeed).toEqual '1.0'
+        expect(@player.caption.captionAssetPath).toEqual '/static/subs/'
+
+      it 'create video speed control', ->
+        expect(window.VideoSpeedControlAlpha.prototype.initialize).toHaveBeenCalled()
+        expect(@player.speedControl).toBeDefined()
+        expect(@player.speedControl.el).toBe $('.secondary-controls', @player.el)
+        expect(@player.speedControl.speeds).toEqual ['0.75', '1.0']
+        expect(@player.speedControl.currentSpeed).toEqual '1.0'
+
+      it 'create video progress slider', ->
+        expect(window.VideoProgressSliderAlpha.prototype.initialize).toHaveBeenCalled()
+        expect(@player.progressSlider).toBeDefined()
+        expect(@player.progressSlider.el).toBe $('.slider', @player.el)
+
+      it 'bind to video control play event', ->
+        expect($(@player.control)).toHandleWith 'play', @player.play
+
+      it 'bind to video control pause event', ->
+        expect($(@player.control)).toHandleWith 'pause', @player.pause
+
+      it 'bind to video caption seek event', ->
+        expect($(@player.caption)).toHandleWith 'caption_seek', @player.onSeek
+
+      it 'bind to video speed control speedChange event', ->
+        expect($(@player.speedControl)).toHandleWith 'speedChange', @player.onSpeedChange
+
+      it 'bind to video progress slider seek event', ->
+        expect($(@player.progressSlider)).toHandleWith 'slide_seek',
@player.onSeek + + it 'bind to video volume control volumeChange event', -> + expect($(@player.volumeControl)).toHandleWith 'volumeChange', @player.onVolumeChange + + it 'bind to key press', -> + expect($(document.documentElement)).toHandleWith 'keyup', @player.bindExitFullScreen + + it 'bind to fullscreen switching button', -> + expect($('.add-fullscreen')).toHandleWith 'click', @player.toggleFullScreen + + it 'create Youtube player', -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + spyOn YT, 'Player' + @player = new VideoPlayerAlpha video: @video + expect(YT.Player).toHaveBeenCalledWith('id', { + playerVars: playerVars + videoId: 'normalSpeedYoutubeId' + events: + onReady: @player.onReady + onStateChange: @player.onStateChange + onPlaybackQualityChange: @player.onPlaybackQualityChange + }) + + it 'create HTML5 player', -> + jasmine.stubVideoPlayerAlpha @, [], false, true + spyOn HTML5Video, 'Player' + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + expect(HTML5Video.Player).toHaveBeenCalledWith @video.el, + playerVars: playerVars + videoSources: @video.html5Sources + events: + onReady: @player.onReady + onStateChange: @player.onStateChange + + describe 'when not on a touch based device', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + $('.add-fullscreen, .hide-subtitles').removeData 'qtip' + @player = new VideoPlayerAlpha video: @video + + it 'add the tooltip to fullscreen and subtitle button', -> + expect($('.add-fullscreen')).toHaveData 'qtip' + expect($('.hide-subtitles')).toHaveData 'qtip' + + it 'create video volume control', -> + expect(window.VideoVolumeControlAlpha.prototype.initialize).toHaveBeenCalled() + expect(@player.volumeControl).toBeDefined() + expect(@player.volumeControl.el).toBe $('.secondary-controls', @player.el) + + describe 'when on a touch based device', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + window.onTouchBasedDevice.andReturn true + $('.add-fullscreen, .hide-subtitles').removeData 'qtip' + @player = new VideoPlayerAlpha video: @video + + it 'does not add the tooltip to fullscreen and subtitle button', -> + expect($('.add-fullscreen')).not.toHaveData 'qtip' + expect($('.hide-subtitles')).not.toHaveData 'qtip' + + it 'does not create video volume control', -> + expect(window.VideoVolumeControlAlpha.prototype.initialize).not.toHaveBeenCalled() + expect(@player.volumeControl).not.toBeDefined() + + describe 'onReady', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + spyOn @video, 'log' + $('.video').append $('
') + @video.embed() + @player = @video.player + spyOnEvent @player, 'ready' + spyOnEvent @player, 'updatePlayTime' + @player.onReady() + + it 'log the load_video event', -> + expect(@video.log).toHaveBeenCalledWith 'load_video' + + describe 'when not on a touch based device', -> + beforeEach -> + spyOn @player, 'play' + @player.onReady() + + it 'autoplay the first video', -> + expect(@player.play).toHaveBeenCalled() + + describe 'when on a touch based device', -> + beforeEach -> + window.onTouchBasedDevice.andReturn true + spyOn @player, 'play' + @player.onReady() + + it 'does not autoplay the first video', -> + expect(@player.play).not.toHaveBeenCalled() + + describe 'onStateChange', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + + describe 'when the video is unstarted', -> + beforeEach -> + @player = new VideoPlayerAlpha video: @video + spyOn @player.control, 'pause' + @player.caption.pause = jasmine.createSpy('VideoCaptionAlpha.pause') + @player.onStateChange data: YT.PlayerState.UNSTARTED + + it 'pause the video control', -> + expect(@player.control.pause).toHaveBeenCalled() + + it 'pause the video caption', -> + expect(@player.caption.pause).toHaveBeenCalled() + + describe 'when the video is playing', -> + beforeEach -> + @anotherPlayer = jasmine.createSpyObj 'AnotherPlayer', ['onPause'] + window.OldVideoPlayerAlpha = @anotherPlayer + @player = new VideoPlayerAlpha video: @video + spyOn @video, 'log' + spyOn(window, 'setInterval').andReturn 100 + spyOn @player.control, 'play' + @player.caption.play = jasmine.createSpy('VideoCaptionAlpha.play') + @player.progressSlider.play = jasmine.createSpy('VideoProgressSliderAlpha.play') + @player.player.getVideoEmbedCode.andReturn 'embedCode' + @player.onStateChange data: YT.PlayerState.PLAYING + + it 'log the play_video event', -> + expect(@video.log).toHaveBeenCalledWith 'play_video', {currentTime: 0} + + it 'pause other video player', -> + expect(@anotherPlayer.onPause).toHaveBeenCalled() + + it 'set current video player as active player', -> + expect(window.OldVideoPlayerAlpha).toEqual @player + + it 'set update interval', -> + expect(window.setInterval).toHaveBeenCalledWith @player.update, 200 + expect(@player.player.interval).toEqual 100 + + it 'play the video control', -> + expect(@player.control.play).toHaveBeenCalled() + + it 'play the video caption', -> + expect(@player.caption.play).toHaveBeenCalled() + + it 'play the video progress slider', -> + expect(@player.progressSlider.play).toHaveBeenCalled() + + describe 'when the video is paused', -> + beforeEach -> + @player = new VideoPlayerAlpha video: @video + spyOn @video, 'log' + spyOn window, 'clearInterval' + spyOn @player.control, 'pause' + @player.caption.pause = 
jasmine.createSpy('VideoCaptionAlpha.pause') + @player.player.interval = 100 + @player.player.getVideoEmbedCode.andReturn 'embedCode' + @player.onStateChange data: YT.PlayerState.PAUSED + + it 'log the pause_video event', -> + expect(@video.log).toHaveBeenCalledWith 'pause_video', {currentTime: 0} + + it 'clear update interval', -> + expect(window.clearInterval).toHaveBeenCalledWith 100 + expect(@player.player.interval).toBeNull() + + it 'pause the video control', -> + expect(@player.control.pause).toHaveBeenCalled() + + it 'pause the video caption', -> + expect(@player.caption.pause).toHaveBeenCalled() + + describe 'when the video is ended', -> + beforeEach -> + @player = new VideoPlayerAlpha video: @video + spyOn @player.control, 'pause' + @player.caption.pause = jasmine.createSpy('VideoCaptionAlpha.pause') + @player.onStateChange data: YT.PlayerState.ENDED + + it 'pause the video control', -> + expect(@player.control.pause).toHaveBeenCalled() + + it 'pause the video caption', -> + expect(@player.caption.pause).toHaveBeenCalled() + + describe 'onSeek', -> + conf = [{ + desc : 'check if seek_video is logged with slide_seek type', + type: 'slide_seek', + obj: 'progressSlider' + },{ + desc : 'check if seek_video is logged with caption_seek type', + type: 'caption_seek', + obj: 'caption' + }] + + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + spyOn window, 'clearInterval' + @player.player.interval = 100 + spyOn @player, 'updatePlayTime' + spyOn @video, 'log' + + $.each conf, (key, value) -> + it value.desc, -> + type = value.type + old_time = 0 + new_time = 60 + $(@player[value.obj]).trigger value.type, new_time + expect(@video.log).toHaveBeenCalledWith 'seek_video', + old_time: old_time + new_time: new_time + type: value.type + + it 'seek the player', -> + $(@player.progressSlider).trigger 'slide_seek', 60 + expect(@player.player.seekTo).toHaveBeenCalledWith 60, true + + it 'call updatePlayTime on player', -> + $(@player.progressSlider).trigger 'slide_seek', 60 + expect(@player.updatePlayTime).toHaveBeenCalledWith 60 + + describe 'when the player is playing', -> + beforeEach -> + $(@player.progressSlider).trigger 'slide_seek', 60 + @player.player.getPlayerState.andReturn YT.PlayerState.PLAYING + @player.onSeek {}, 60 + + it 'reset the update interval', -> + expect(window.clearInterval).toHaveBeenCalledWith 100 + + describe 'when the player is not playing', -> + beforeEach -> + $(@player.progressSlider).trigger 'slide_seek', 60 + @player.player.getPlayerState.andReturn YT.PlayerState.PAUSED + @player.onSeek {}, 60 + + it 'set the current time', -> + expect(@player.currentTime).toEqual 60 + + describe 'onSpeedChange', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + @player.currentTime = 60 + spyOn @player, 'updatePlayTime' + spyOn(@video, 'setSpeed').andCallThrough() + spyOn(@video, 'log') + + describe 'always', -> + beforeEach -> + @player.onSpeedChange {}, '0.75', false + + it 'check if speed_change_video is logged', -> + expect(@video.log).toHaveBeenCalledWith 'speed_change_video', + currentTime: @player.currentTime + old_speed: '1.0' + new_speed: '0.75' + + it 'convert the current time to the new speed', -> + expect(@player.currentTime).toEqual '80.000' + + it 'set video speed to the new speed', -> + expect(@video.setSpeed).toHaveBeenCalledWith '0.75', false + + it 'tell video caption that the speed has changed', -> + expect(@player.caption.currentSpeed).toEqual '0.75' + + describe 'when the video is playing', -> + beforeEach -> + @player.player.getPlayerState.andReturn YT.PlayerState.PLAYING + @player.onSpeedChange {}, '0.75' + + it 'load the video', -> + expect(@player.player.loadVideoById).toHaveBeenCalledWith 'slowerSpeedYoutubeId', '80.000' + + it 'trigger updatePlayTime event', -> + expect(@player.updatePlayTime).toHaveBeenCalledWith '80.000' + + describe 'when the video is not playing', -> + beforeEach -> + @player.player.getPlayerState.andReturn YT.PlayerState.PAUSED + @player.onSpeedChange {}, '0.75' + + it 'cue the video', -> + expect(@player.player.cueVideoById).toHaveBeenCalledWith 'slowerSpeedYoutubeId', '80.000' + + it 'trigger updatePlayTime event', -> + expect(@player.updatePlayTime).toHaveBeenCalledWith '80.000' + + describe 'onVolumeChange', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + @player.onVolumeChange undefined, 60 + + it 'set the volume on player', -> + expect(@player.player.setVolume).toHaveBeenCalledWith 60 + + describe 'update', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + spyOn @player, 'updatePlayTime' + + describe 'when the current time is unavailable from the player', -> + beforeEach -> + @player.player.getCurrentTime.andReturn undefined + @player.update() + + it 'does not trigger updatePlayTime event', -> + expect(@player.updatePlayTime).not.toHaveBeenCalled() + + describe 'when the current time is available from the player', -> + beforeEach -> + @player.player.getCurrentTime.andReturn 60 + @player.update() + + it 'trigger updatePlayTime event', -> + expect(@player.updatePlayTime).toHaveBeenCalledWith(60) + + describe 'updatePlayTime', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + spyOn(@video, 'getDuration').andReturn 1800 + @player.caption.updatePlayTime = jasmine.createSpy('VideoCaptionAlpha.updatePlayTime') + @player.progressSlider.updatePlayTime = jasmine.createSpy('VideoProgressSliderAlpha.updatePlayTime') + @player.updatePlayTime 60 + + it 'update the video playback time', -> + expect($('.vidtime')).toHaveHtml '1:00 / 30:00' + + it 'update the playback time on caption', -> + expect(@player.caption.updatePlayTime).toHaveBeenCalledWith 60 + + it 'update the playback time on progress slider', -> + expect(@player.progressSlider.updatePlayTime).toHaveBeenCalledWith 60, 1800 + + describe 'toggleFullScreen', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + @player.caption.resize = jasmine.createSpy('VideoCaptionAlpha.resize') + + describe 'when the video player is not full screen', -> + beforeEach -> + spyOn @video, 'log' + @player.el.removeClass 'fullscreen' + @player.toggleFullScreen(jQuery.Event("click")) + + it 'log the fullscreen event', -> + expect(@video.log).toHaveBeenCalledWith 'fullscreen', + currentTime: @player.currentTime + + it 'replace the full screen button tooltip', -> + expect($('.add-fullscreen')).toHaveAttr 'title', 'Exit fill browser' + + it 'add the fullscreen class', -> + expect(@player.el).toHaveClass 'fullscreen' + + it 'tell VideoCaption to resize', -> + expect(@player.caption.resize).toHaveBeenCalled() + + describe 'when the video player already full screen', -> + beforeEach -> + spyOn @video, 'log' + @player.el.addClass 'fullscreen' + @player.toggleFullScreen(jQuery.Event("click")) + + it 'log the not_fullscreen event', -> + expect(@video.log).toHaveBeenCalledWith 'not_fullscreen', + currentTime: @player.currentTime + + it 'replace the full screen button tooltip', -> + expect($('.add-fullscreen')).toHaveAttr 'title', 'Fill browser' + + it 'remove exit full screen button', -> + expect(@player.el).not.toContain 'a.exit' + + it 'remove the fullscreen class', -> + expect(@player.el).not.toHaveClass 'fullscreen' + + it 'tell VideoCaption to resize', -> + expect(@player.caption.resize).toHaveBeenCalled() + + describe 'play', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + + describe 'when the player is not ready', -> + beforeEach -> + @player.player.playVideo = undefined + @player.play() + + it 'does nothing', -> + expect(@player.player.playVideo).toBeUndefined() + + describe 'when the player is ready', -> + beforeEach -> + @player.player.playVideo.andReturn true + @player.play() + + it 'delegate to the Youtube player', -> + expect(@player.player.playVideo).toHaveBeenCalled() + + describe 'isPlaying', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + + describe 'when the video is playing', -> + beforeEach -> + @player.player.getPlayerState.andReturn YT.PlayerState.PLAYING + + it 'return true', -> + expect(@player.isPlaying()).toBeTruthy() + + describe 'when the video is not playing', -> + beforeEach -> + @player.player.getPlayerState.andReturn YT.PlayerState.PAUSED + + it 'return false', -> + expect(@player.isPlaying()).toBeFalsy() + + describe 'pause', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + @player.pause() + + it 'delegate to the Youtube player', -> + expect(@player.player.pauseVideo).toHaveBeenCalled() + + describe 'duration', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + spyOn @video, 'getDuration' + @player.duration() + + it 'delegate to the video', -> + expect(@video.getDuration).toHaveBeenCalled() + + describe 'currentSpeed', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + @video.speed = '3.0' + + it 'delegate to the video', -> + expect(@player.currentSpeed()).toEqual '3.0' + + describe 'volume', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @, [], false + $('.video').append $('
') + @player = new VideoPlayerAlpha video: @video + @player.player.getVolume.andReturn 42 + + describe 'without value', -> + it 'return current volume', -> + expect(@player.volume()).toEqual 42 + + describe 'with value', -> + it 'set player volume', -> + @player.volume(60) + expect(@player.player.setVolume).toHaveBeenCalledWith(60) diff --git a/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_progress_slider_spec.coffee b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_progress_slider_spec.coffee new file mode 100644 index 0000000000..dd787aefbb --- /dev/null +++ b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_progress_slider_spec.coffee @@ -0,0 +1,165 @@ +describe 'VideoProgressSliderAlpha', -> + beforeEach -> + window.onTouchBasedDevice = jasmine.createSpy('onTouchBasedDevice').andReturn false + + describe 'constructor', -> + describe 'on a non-touch based device', -> + beforeEach -> + spyOn($.fn, 'slider').andCallThrough() + @player = jasmine.stubVideoPlayerAlpha @ + @progressSlider = @player.progressSlider + + it 'build the slider', -> + expect(@progressSlider.slider).toBe '.slider' + expect($.fn.slider).toHaveBeenCalledWith + range: 'min' + change: @progressSlider.onChange + slide: @progressSlider.onSlide + stop: @progressSlider.onStop + + it 'build the seek handle', -> + expect(@progressSlider.handle).toBe '.slider .ui-slider-handle' + expect($.fn.qtip).toHaveBeenCalledWith + content: "0:00" + position: + my: 'bottom center' + at: 'top center' + container: @progressSlider.handle + hide: + delay: 700 + style: + classes: 'ui-tooltip-slider' + widget: true + + describe 'on a touch-based device', -> + beforeEach -> + window.onTouchBasedDevice.andReturn true + spyOn($.fn, 'slider').andCallThrough() + @player = jasmine.stubVideoPlayerAlpha @ + @progressSlider = @player.progressSlider + + it 'does not build the slider', -> + expect(@progressSlider.slider).toBeUndefined + expect($.fn.slider).not.toHaveBeenCalled() + + describe 
'play', -> + beforeEach -> + spyOn(VideoProgressSliderAlpha.prototype, 'buildSlider').andCallThrough() + @player = jasmine.stubVideoPlayerAlpha @ + @progressSlider = @player.progressSlider + + describe 'when the slider was already built', -> + + beforeEach -> + @progressSlider.play() + + it 'does not build the slider', -> + expect(@progressSlider.buildSlider.calls.length).toEqual 1 + + describe 'when the slider was not already built', -> + beforeEach -> + spyOn($.fn, 'slider').andCallThrough() + @progressSlider.slider = null + @progressSlider.play() + + it 'build the slider', -> + expect(@progressSlider.slider).toBe '.slider' + expect($.fn.slider).toHaveBeenCalledWith + range: 'min' + change: @progressSlider.onChange + slide: @progressSlider.onSlide + stop: @progressSlider.onStop + + it 'build the seek handle', -> + expect(@progressSlider.handle).toBe '.ui-slider-handle' + expect($.fn.qtip).toHaveBeenCalledWith + content: "0:00" + position: + my: 'bottom center' + at: 'top center' + container: @progressSlider.handle + hide: + delay: 700 + style: + classes: 'ui-tooltip-slider' + widget: true + + describe 'updatePlayTime', -> + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @progressSlider = @player.progressSlider + + describe 'when frozen', -> + beforeEach -> + spyOn($.fn, 'slider').andCallThrough() + @progressSlider.frozen = true + @progressSlider.updatePlayTime 20, 120 + + it 'does not update the slider', -> + expect($.fn.slider).not.toHaveBeenCalled() + + describe 'when not frozen', -> + beforeEach -> + spyOn($.fn, 'slider').andCallThrough() + @progressSlider.frozen = false + @progressSlider.updatePlayTime 20, 120 + + it 'update the max value of the slider', -> + expect($.fn.slider).toHaveBeenCalledWith 'option', 'max', 120 + + it 'update current value of the slider', -> + expect($.fn.slider).toHaveBeenCalledWith 'value', 20 + + describe 'onSlide', -> + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @progressSlider = @player.progressSlider 
+ spyOnEvent @progressSlider, 'slide_seek' + @progressSlider.onSlide {}, value: 20 + + it 'freeze the slider', -> + expect(@progressSlider.frozen).toBeTruthy() + + it 'update the tooltip', -> + expect($.fn.qtip).toHaveBeenCalled() + + it 'trigger seek event', -> + expect('slide_seek').toHaveBeenTriggeredOn @progressSlider + expect(@player.currentTime).toEqual 20 + + describe 'onChange', -> + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @progressSlider = @player.progressSlider + @progressSlider.onChange {}, value: 20 + + it 'update the tooltip', -> + expect($.fn.qtip).toHaveBeenCalled() + + describe 'onStop', -> + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @progressSlider = @player.progressSlider + spyOnEvent @progressSlider, 'slide_seek' + @progressSlider.onStop {}, value: 20 + + it 'freeze the slider', -> + expect(@progressSlider.frozen).toBeTruthy() + + it 'trigger seek event', -> + expect('slide_seek').toHaveBeenTriggeredOn @progressSlider + expect(@player.currentTime).toEqual 20 + + it 'set timeout to unfreeze the slider', -> + expect(window.setTimeout).toHaveBeenCalledWith jasmine.any(Function), 200 + window.setTimeout.mostRecentCall.args[0]() + expect(@progressSlider.frozen).toBeFalsy() + + describe 'updateTooltip', -> + beforeEach -> + @player = jasmine.stubVideoPlayerAlpha @ + @progressSlider = @player.progressSlider + @progressSlider.updateTooltip 90 + + it 'set the tooltip value', -> + expect($.fn.qtip).toHaveBeenCalledWith 'option', 'content.text', '1:30' diff --git a/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_speed_control_spec.coffee b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_speed_control_spec.coffee new file mode 100644 index 0000000000..ca4bfe815a --- /dev/null +++ b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_speed_control_spec.coffee @@ -0,0 +1,91 @@ +describe 'VideoSpeedControlAlpha', -> + beforeEach -> + window.onTouchBasedDevice = 
jasmine.createSpy('onTouchBasedDevice').andReturn false + jasmine.stubVideoPlayerAlpha @ + $('.speeds').remove() + + describe 'constructor', -> + describe 'always', -> + beforeEach -> + @speedControl = new VideoSpeedControlAlpha el: $('.secondary-controls'), speeds: @video.speeds, currentSpeed: '1.0' + + it 'add the video speed control to player', -> + secondaryControls = $('.secondary-controls') + li = secondaryControls.find('.video_speeds li') + expect(secondaryControls).toContain '.speeds' + expect(secondaryControls).toContain '.video_speeds' + expect(secondaryControls.find('p.active').text()).toBe '1.0x' + expect(li.filter('.active')).toHaveData 'speed', @speedControl.currentSpeed + expect(li.length).toBe @speedControl.speeds.length + $.each li.toArray().reverse(), (index, link) => + expect($(link)).toHaveData 'speed', @speedControl.speeds[index] + expect($(link).find('a').text()).toBe @speedControl.speeds[index] + 'x' + + it 'bind to change video speed link', -> + expect($('.video_speeds a')).toHandleWith 'click', @speedControl.changeVideoSpeed + + describe 'when running on touch based device', -> + beforeEach -> + window.onTouchBasedDevice.andReturn true + $('.speeds').removeClass 'open' + @speedControl = new VideoSpeedControlAlpha el: $('.secondary-controls'), speeds: @video.speeds, currentSpeed: '1.0' + + it 'open the speed toggle on click', -> + $('.speeds').click() + expect($('.speeds')).toHaveClass 'open' + $('.speeds').click() + expect($('.speeds')).not.toHaveClass 'open' + + describe 'when running on non-touch based device', -> + beforeEach -> + $('.speeds').removeClass 'open' + @speedControl = new VideoSpeedControlAlpha el: $('.secondary-controls'), speeds: @video.speeds, currentSpeed: '1.0' + + it 'open the speed toggle on hover', -> + $('.speeds').mouseenter() + expect($('.speeds')).toHaveClass 'open' + $('.speeds').mouseleave() + expect($('.speeds')).not.toHaveClass 'open' + + it 'close the speed toggle on mouse out', -> + 
$('.speeds').mouseenter().mouseleave() + expect($('.speeds')).not.toHaveClass 'open' + + it 'close the speed toggle on click', -> + $('.speeds').mouseenter().click() + expect($('.speeds')).not.toHaveClass 'open' + + describe 'changeVideoSpeed', -> + beforeEach -> + @speedControl = new VideoSpeedControlAlpha el: $('.secondary-controls'), speeds: @video.speeds, currentSpeed: '1.0' + @video.setSpeed '1.0' + + describe 'when new speed is the same', -> + beforeEach -> + spyOnEvent @speedControl, 'speedChange' + $('li[data-speed="1.0"] a').click() + + it 'does not trigger speedChange event', -> + expect('speedChange').not.toHaveBeenTriggeredOn @speedControl + + describe 'when new speed is not the same', -> + beforeEach -> + @newSpeed = null + $(@speedControl).bind 'speedChange', (event, newSpeed) => @newSpeed = newSpeed + spyOnEvent @speedControl, 'speedChange' + $('li[data-speed="0.75"] a').click() + + it 'trigger speedChange event', -> + expect('speedChange').toHaveBeenTriggeredOn @speedControl + expect(@newSpeed).toEqual 0.75 + + describe 'onSpeedChange', -> + beforeEach -> + @speedControl = new VideoSpeedControlAlpha el: $('.secondary-controls'), speeds: @video.speeds, currentSpeed: '1.0' + $('li[data-speed="1.0"] a').addClass 'active' + @speedControl.setSpeed '0.75' + + it 'set the new speed as active', -> + expect($('.video_speeds li[data-speed="1.0"]')).not.toHaveClass 'active' + expect($('.video_speeds li[data-speed="0.75"]')).toHaveClass 'active' + expect($('.speeds p.active')).toHaveHtml '0.75x' diff --git a/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_volume_control_spec.coffee b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_volume_control_spec.coffee new file mode 100644 index 0000000000..4bb9f1cbf8 --- /dev/null +++ b/common/lib/xmodule/xmodule/js/spec/videoalpha/display/video_volume_control_spec.coffee @@ -0,0 +1,94 @@ +describe 'VideoVolumeControlAlpha', -> + beforeEach -> + jasmine.stubVideoPlayerAlpha @ + 
$('.volume').remove() + + describe 'constructor', -> + beforeEach -> + spyOn($.fn, 'slider') + @volumeControl = new VideoVolumeControlAlpha el: $('.secondary-controls') + + it 'initialize currentVolume to 100', -> + expect(@volumeControl.currentVolume).toEqual 100 + + it 'render the volume control', -> + expect($('.secondary-controls').html()).toContain """ +
+ +
+
+
+
+ """ + + it 'create the slider', -> + expect($.fn.slider).toHaveBeenCalledWith + orientation: "vertical" + range: "min" + min: 0 + max: 100 + value: 100 + change: @volumeControl.onChange + slide: @volumeControl.onChange + + it 'bind the volume control', -> + expect($('.volume>a')).toHandleWith 'click', @volumeControl.toggleMute + + expect($('.volume')).not.toHaveClass 'open' + $('.volume').mouseenter() + expect($('.volume')).toHaveClass 'open' + $('.volume').mouseleave() + expect($('.volume')).not.toHaveClass 'open' + + describe 'onChange', -> + beforeEach -> + spyOnEvent @volumeControl, 'volumeChange' + @newVolume = undefined + @volumeControl = new VideoVolumeControlAlpha el: $('.secondary-controls') + $(@volumeControl).bind 'volumeChange', (event, volume) => @newVolume = volume + + describe 'when the new volume is more than 0', -> + beforeEach -> + @volumeControl.onChange undefined, value: 60 + + it 'set the player volume', -> + expect(@newVolume).toEqual 60 + + it 'remote muted class', -> + expect($('.volume')).not.toHaveClass 'muted' + + describe 'when the new volume is 0', -> + beforeEach -> + @volumeControl.onChange undefined, value: 0 + + it 'set the player volume', -> + expect(@newVolume).toEqual 0 + + it 'add muted class', -> + expect($('.volume')).toHaveClass 'muted' + + describe 'toggleMute', -> + beforeEach -> + @newVolume = undefined + @volumeControl = new VideoVolumeControlAlpha el: $('.secondary-controls') + $(@volumeControl).bind 'volumeChange', (event, volume) => @newVolume = volume + + describe 'when the current volume is more than 0', -> + beforeEach -> + @volumeControl.currentVolume = 60 + @volumeControl.toggleMute() + + it 'save the previous volume', -> + expect(@volumeControl.previousVolume).toEqual 60 + + it 'set the player volume', -> + expect(@newVolume).toEqual 0 + + describe 'when the current volume is 0', -> + beforeEach -> + @volumeControl.currentVolume = 0 + @volumeControl.previousVolume = 60 + @volumeControl.toggleMute() + + it 'set 
the player volume to previous volume', -> + expect(@newVolume).toEqual 60 diff --git a/common/lib/xmodule/xmodule/js/spec/videoalpha/display_spec.coffee b/common/lib/xmodule/xmodule/js/spec/videoalpha/display_spec.coffee new file mode 100644 index 0000000000..3715c3d813 --- /dev/null +++ b/common/lib/xmodule/xmodule/js/spec/videoalpha/display_spec.coffee @@ -0,0 +1,286 @@ +describe 'VideoAlpha', -> + metadata = + slowerSpeedYoutubeId: + id: @slowerSpeedYoutubeId + duration: 300 + normalSpeedYoutubeId: + id: @normalSpeedYoutubeId + duration: 200 + + beforeEach -> + jasmine.stubRequests() + window.onTouchBasedDevice = jasmine.createSpy('onTouchBasedDevice').andReturn false + @videosDefinition = '0.75:slowerSpeedYoutubeId,1.0:normalSpeedYoutubeId' + @slowerSpeedYoutubeId = 'slowerSpeedYoutubeId' + @normalSpeedYoutubeId = 'normalSpeedYoutubeId' + + afterEach -> + window.OldVideoPlayerAlpha = undefined + window.onYouTubePlayerAPIReady = undefined + window.onHTML5PlayerAPIReady = undefined + + describe 'constructor', -> + describe 'YT', -> + beforeEach -> + loadFixtures 'videoalpha.html' + @stubVideoPlayerAlpha = jasmine.createSpy('VideoPlayerAlpha') + $.cookie.andReturn '0.75' + + describe 'by default', -> + beforeEach -> + spyOn(window.VideoAlpha.prototype, 'fetchMetadata').andCallFake -> + @metadata = metadata + @video = new VideoAlpha '#example', @videosDefinition + + it 'check videoType', -> + expect(@video.videoType).toEqual('youtube') + + it 'reset the current video player', -> + expect(window.OldVideoPlayerAlpha).toBeUndefined() + + it 'set the elements', -> + expect(@video.el).toBe '#video_id' + + it 'parse the videos', -> + expect(@video.videos).toEqual + '0.75': @slowerSpeedYoutubeId + '1.0': @normalSpeedYoutubeId + + it 'fetch the video metadata', -> + expect(@video.fetchMetadata).toHaveBeenCalled + expect(@video.metadata).toEqual metadata + + it 'parse available video speeds', -> + expect(@video.speeds).toEqual ['0.75', '1.0'] + + it 'set current video speed 
via cookie', -> + expect(@video.speed).toEqual '0.75' + + it 'store a reference for this video player in the element', -> + expect($('.video').data('video')).toEqual @video + + describe 'when the Youtube API is already available', -> + beforeEach -> + @originalYT = window.YT + window.YT = { Player: true } + spyOn(window, 'VideoPlayerAlpha').andReturn(@stubVideoPlayerAlpha) + @video = new VideoAlpha '#example', @videosDefinition + + afterEach -> + window.YT = @originalYT + + it 'create the Video Player', -> + expect(window.VideoPlayerAlpha).toHaveBeenCalledWith(video: @video) + expect(@video.player).toEqual @stubVideoPlayerAlpha + + describe 'when the Youtube API is not ready', -> + beforeEach -> + @originalYT = window.YT + window.YT = {} + @video = new VideoAlpha '#example', @videosDefinition + + afterEach -> + window.YT = @originalYT + + it 'set the callback on the window object', -> + expect(window.onYouTubePlayerAPIReady).toEqual jasmine.any(Function) + + describe 'when the Youtube API becoming ready', -> + beforeEach -> + @originalYT = window.YT + window.YT = {} + spyOn(window, 'VideoPlayerAlpha').andReturn(@stubVideoPlayerAlpha) + @video = new VideoAlpha '#example', @videosDefinition + window.onYouTubePlayerAPIReady() + + afterEach -> + window.YT = @originalYT + + it 'create the Video Player for all video elements', -> + expect(window.VideoPlayerAlpha).toHaveBeenCalledWith(video: @video) + expect(@video.player).toEqual @stubVideoPlayerAlpha + + describe 'HTML5', -> + beforeEach -> + loadFixtures 'videoalpha_html5.html' + @stubVideoPlayerAlpha = jasmine.createSpy('VideoPlayerAlpha') + $.cookie.andReturn '0.75' + + describe 'by default', -> + beforeEach -> + @originalHTML5 = window.HTML5Video.Player + window.HTML5Video.Player = undefined + @video = new VideoAlpha '#example', @videosDefinition + + afterEach -> + window.HTML5Video.Player = @originalHTML5 + + it 'check videoType', -> + expect(@video.videoType).toEqual('html5') + + it 'reset the current video 
player', -> + expect(window.OldVideoPlayerAlpha).toBeUndefined() + + it 'set the elements', -> + expect(@video.el).toBe '#video_id' + + it 'parse the videos if subtitles exist', -> + sub = 'test_name_of_the_subtitles' + expect(@video.videos).toEqual + '0.75': sub + '1.0': sub + '1.25': sub + '1.5': sub + + it 'parse the videos if subtitles doesn\'t exist', -> + $('#example').find('.video').data('sub', '') + @video = new VideoAlpha '#example', @videosDefinition + sub = '' + expect(@video.videos).toEqual + '0.75': sub + '1.0': sub + '1.25': sub + '1.5': sub + + it 'parse Html5 sources', -> + html5Sources = + mp4: 'test.mp4' + webm: 'test.webm' + ogg: 'test.ogv' + expect(@video.html5Sources).toEqual html5Sources + + it 'parse available video speeds', -> + speeds = jasmine.stubbedHtml5Speeds + expect(@video.speeds).toEqual speeds + + it 'set current video speed via cookie', -> + expect(@video.speed).toEqual '0.75' + + it 'store a reference for this video player in the element', -> + expect($('.video').data('video')).toEqual @video + + describe 'when the HTML5 API is already available', -> + beforeEach -> + @originalHTML5Video = window.HTML5Video + window.HTML5Video = { Player: true } + spyOn(window, 'VideoPlayerAlpha').andReturn(@stubVideoPlayerAlpha) + @video = new VideoAlpha '#example', @videosDefinition + + afterEach -> + window.HTML5Video = @originalHTML5Video + + it 'create the Video Player', -> + expect(window.VideoPlayerAlpha).toHaveBeenCalledWith(video: @video) + expect(@video.player).toEqual @stubVideoPlayerAlpha + + describe 'when the HTML5 API is not ready', -> + beforeEach -> + @originalHTML5Video = window.HTML5Video + window.HTML5Video = {} + @video = new VideoAlpha '#example', @videosDefinition + + afterEach -> + window.HTML5Video = @originalHTML5Video + + it 'set the callback on the window object', -> + expect(window.onHTML5PlayerAPIReady).toEqual jasmine.any(Function) + + describe 'when the HTML5 API becoming ready', -> + beforeEach -> + 
@originalHTML5Video = window.HTML5Video + window.HTML5Video = {} + spyOn(window, 'VideoPlayerAlpha').andReturn(@stubVideoPlayerAlpha) + @video = new VideoAlpha '#example', @videosDefinition + window.onHTML5PlayerAPIReady() + + afterEach -> + window.HTML5Video = @originalHTML5Video + + it 'create the Video Player for all video elements', -> + expect(window.VideoPlayerAlpha).toHaveBeenCalledWith(video: @video) + expect(@video.player).toEqual @stubVideoPlayerAlpha + + describe 'youtubeId', -> + beforeEach -> + loadFixtures 'videoalpha.html' + $.cookie.andReturn '1.0' + @video = new VideoAlpha '#example', @videosDefinition + + describe 'with speed', -> + it 'return the video id for given speed', -> + expect(@video.youtubeId('0.75')).toEqual @slowerSpeedYoutubeId + expect(@video.youtubeId('1.0')).toEqual @normalSpeedYoutubeId + + describe 'without speed', -> + it 'return the video id for current speed', -> + expect(@video.youtubeId()).toEqual @normalSpeedYoutubeId + + describe 'setSpeed', -> + describe 'YT', -> + beforeEach -> + loadFixtures 'videoalpha.html' + @video = new VideoAlpha '#example', @videosDefinition + + describe 'when new speed is available', -> + beforeEach -> + @video.setSpeed '0.75' + + it 'set new speed', -> + expect(@video.speed).toEqual '0.75' + + it 'save setting for new speed', -> + expect($.cookie).toHaveBeenCalledWith 'video_speed', '0.75', expires: 3650, path: '/' + + describe 'when new speed is not available', -> + beforeEach -> + @video.setSpeed '1.75' + + it 'set speed to 1.0x', -> + expect(@video.speed).toEqual '1.0' + + describe 'HTML5', -> + beforeEach -> + loadFixtures 'videoalpha_html5.html' + @video = new VideoAlpha '#example', @videosDefinition + + describe 'when new speed is available', -> + beforeEach -> + @video.setSpeed '0.75' + + it 'set new speed', -> + expect(@video.speed).toEqual '0.75' + + it 'save setting for new speed', -> + expect($.cookie).toHaveBeenCalledWith 'video_speed', '0.75', expires: 3650, path: '/' + + describe 
'when new speed is not available', -> + beforeEach -> + @video.setSpeed '1.75' + + it 'set speed to 1.0x', -> + expect(@video.speed).toEqual '1.0' + + describe 'getDuration', -> + beforeEach -> + loadFixtures 'videoalpha.html' + @video = new VideoAlpha '#example', @videosDefinition + + it 'return duration for current video', -> + expect(@video.getDuration()).toEqual 200 + + describe 'log', -> + beforeEach -> + loadFixtures 'videoalpha.html' + @video = new VideoAlpha '#example', @videosDefinition + spyOn Logger, 'log' + @video.log 'someEvent', { + currentTime: 25, + speed: '1.0' + } + + it 'call the logger with valid extra parameters', -> + expect(Logger.log).toHaveBeenCalledWith 'someEvent', + id: 'id' + code: @normalSpeedYoutubeId + currentTime: 25 + speed: '1.0' diff --git a/common/lib/xmodule/xmodule/js/src/video/display.coffee b/common/lib/xmodule/xmodule/js/src/video/display.coffee index aadafbc8d0..0393fe8b9c 100644 --- a/common/lib/xmodule/xmodule/js/src/video/display.coffee +++ b/common/lib/xmodule/xmodule/js/src/video/display.coffee @@ -5,7 +5,7 @@ class @Video @start = @el.data('start') @end = @el.data('end') @caption_asset_path = @el.data('caption-asset-path') - @show_captions = @el.data('show-captions') == "true" + @show_captions = @el.data('show-captions') window.player = null @el = $("#video_#{@id}") @parseVideos @el.data('streams') @@ -13,7 +13,7 @@ class @Video @parseSpeed() $("#video_#{@id}").data('video', this).addClass('video-load-complete') - @hide_captions = $.cookie('hide_captions') == 'true' + @hide_captions = $.cookie('hide_captions') == 'true' or (not @show_captions) if YT.Player @embed() diff --git a/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_caption.coffee b/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_caption.coffee index ae64194c55..317979bb4d 100644 --- a/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_caption.coffee +++ b/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_caption.coffee 
@@ -37,7 +37,7 @@ class @VideoCaptionAlpha extends SubviewAlpha @loaded = true if onTouchBasedDevice() - $('.subtitles li').html "Caption will be displayed when you start playing the video." + $('.subtitles').html "
<li>Caption will be displayed when you start playing the video.</li>
" + else + @renderCaption() + @@ -140,12 +140,16 @@ class @VideoCaptionAlpha extends SubviewAlpha hideCaptions: (hide_captions) => if hide_captions + type = 'hide_transcript' @$('.hide-subtitles').attr('title', 'Turn on captions') @el.addClass('closed') else + type = 'show_transcript' @$('.hide-subtitles').attr('title', 'Turn off captions') @el.removeClass('closed') @scrollCaption() + @video.log type, + currentTime: @player.currentTime $.cookie('hide_captions', hide_captions, expires: 3650, path: '/') captionHeight: -> diff --git a/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_player.coffee b/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_player.coffee index 8d251cc1f8..7019386e04 100644 --- a/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_player.coffee +++ b/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_player.coffee @@ -6,7 +6,7 @@ class @VideoPlayerAlpha extends SubviewAlpha # we must pause the player (stop setInterval() method). if (window.OldVideoPlayerAlpha) and (window.OldVideoPlayerAlpha.onPause) window.OldVideoPlayerAlpha.onPause() - window.OldVideoPlayerAlpha = this + window.OldVideoPlayerAlpha = @ if @video.videoType is 'youtube' @PlayerState = YT.PlayerState @@ -29,7 +29,7 @@ class @VideoPlayerAlpha extends SubviewAlpha $(@progressSlider).bind('slide_seek', @onSeek) if @volumeControl $(@volumeControl).bind('volumeChange', @onVolumeChange) - $(document).keyup @bindExitFullScreen + $(document.documentElement).keyup @bindExitFullScreen @$('.add-fullscreen').click @toggleFullScreen @addToolTip() unless onTouchBasedDevice() @@ -45,6 +45,8 @@ class @VideoPlayerAlpha extends SubviewAlpha if @video.show_captions is true @caption = new VideoCaptionAlpha el: @el + video: @video + player: @ youtubeId: @video.youtubeId('1.0') currentSpeed: @currentSpeed() captionAssetPath: @video.caption_asset_path @@ -66,7 +68,19 @@ class @VideoPlayerAlpha extends SubviewAlpha if @video.end # works in AS3, not HTML5.
but iframe uses AS3 @playerVars.end = @video.end + + # There is a bug that prevents the YouTube API from correctly setting the speed to 1.0 from another speed + # in Firefox when in HTML5 mode. There is a fix that basically reloads the video at speed 1.0 + # when this change is requested (instead of simply requesting a speed change to 1.0). This has to + # be done only when the video is being watched in Firefox. We need to figure out what browser is + # currently executing this code. + # + # TODO: Check the status of http://code.google.com/p/gdata-issues/issues/detail?id=4654 + # When the YouTube team fixes the API bug, we can remove this temporary bug fix. + @video.isFirefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1 + if @video.videoType is 'html5' + @video.playerType = 'browser' @player = new HTML5Video.Player @video.el, playerVars: @playerVars, videoSources: @video.html5Sources, @@ -79,6 +93,7 @@ class @VideoPlayerAlpha extends SubviewAlpha youTubeId = @video.videos['1.0'] else youTubeId = @video.youtubeId() + @video.playerType = 'youtube' @player = new YT.Player @video.id, playerVars: @playerVars videoId: youTubeId @@ -99,7 +114,7 @@ class @VideoPlayerAlpha extends SubviewAlpha @video.log 'load_video' if @video.videoType is 'html5' @player.setPlaybackRate @video.speed - unless onTouchBasedDevice() + if not onTouchBasedDevice() and $('.video:first').data('autoplay') isnt 'False' $('.video-load-complete:first').data('video').player.play() onStateChange: (event) => @@ -235,13 +250,21 @@ class @VideoPlayerAlpha extends SubviewAlpha if @video.videoType is 'youtube' if @video.show_captions is true @caption.currentSpeed = newSpeed - if @video.videoType is 'html5' - @player.setPlaybackRate newSpeed - else if @video.videoType is 'youtube' + + # We request the reloading of the video in the case when YouTube is in Flash player mode, + # or when we are in Firefox, and the new speed is 1.0.
The second case is necessary to + # avoid the bug where in Firefox speed switching to 1.0 in HTML5 player mode is handled + # incorrectly by YouTube API. + # + # TODO: Check the status of http://code.google.com/p/gdata-issues/issues/detail?id=4654 + # When the YouTube team fixes the API bug, we can remove this temporary bug fix. + if (@video.videoType is 'youtube') or ((@video.isFirefox) and (@video.playerType is 'youtube') and (newSpeed is '1.0')) if @isPlaying() @player.loadVideoById(@video.youtubeId(), @currentTime) else @player.cueVideoById(@video.youtubeId(), @currentTime) + else if @video.videoType is 'html5' + @player.setPlaybackRate newSpeed if @video.videoType is 'youtube' @updatePlayTime @currentTime @@ -262,11 +285,15 @@ class @VideoPlayerAlpha extends SubviewAlpha toggleFullScreen: (event) => event.preventDefault() if @el.hasClass('fullscreen') + type = 'not_fullscreen' @$('.add-fullscreen').attr('title', 'Fill browser') @el.removeClass('fullscreen') else + type = 'fullscreen' @el.addClass('fullscreen') @$('.add-fullscreen').attr('title', 'Exit fill browser') + @video.log type, + currentTime: @currentTime if @video.show_captions is true @caption.resize() @@ -281,7 +308,7 @@ class @VideoPlayerAlpha extends SubviewAlpha @player.pauseVideo() if @player.pauseVideo duration: -> - duration = @player.getDuration() + duration = @player.getDuration() if @player.getDuration if isFinite(duration) is false duration = @video.getDuration() duration diff --git a/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_progress_slider.coffee b/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_progress_slider.coffee index e9ed9923b0..5197c4938f 100644 --- a/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_progress_slider.coffee +++ b/common/lib/xmodule/xmodule/js/src/videoalpha/display/video_progress_slider.coffee @@ -12,7 +12,7 @@ class @VideoProgressSliderAlpha extends SubviewAlpha @buildHandle() buildHandle: -> - @handle = @$('.slider 
.ui-slider-handle') + @handle = @$('.ui-slider-handle') @handle.qtip content: "#{Time.format(@slider.slider('value'))}" position: @@ -43,7 +43,7 @@ class @VideoProgressSliderAlpha extends SubviewAlpha onStop: (event, ui) => @frozen = true - $(@).trigger('seek', ui.value) + $(@).trigger('slide_seek', ui.value) setTimeout (=> @frozen = false), 200 updateTooltip: (value)-> diff --git a/common/lib/xmodule/xmodule/mako_module.py b/common/lib/xmodule/xmodule/mako_module.py index 8abb1d7777..526fc1a0eb 100644 --- a/common/lib/xmodule/xmodule/mako_module.py +++ b/common/lib/xmodule/xmodule/mako_module.py @@ -18,14 +18,16 @@ class MakoModuleDescriptor(XModuleDescriptor): Expects the descriptor to have the `mako_template` attribute set with the name of the template to render, and it will pass the descriptor as the `module` parameter to that template + + MakoModuleDescriptor.__init__ takes the same arguments as xmodule.x_module:XModuleDescriptor.__init__ """ - def __init__(self, system, location, model_data): - if getattr(system, 'render_template', None) is None: - raise TypeError('{system} must have a render_template function' + def __init__(self, *args, **kwargs): + super(MakoModuleDescriptor, self).__init__(*args, **kwargs) + if getattr(self.runtime, 'render_template', None) is None: + raise TypeError('{runtime} must have a render_template function' ' in order to use a MakoDescriptor'.format( - system=system)) - super(MakoModuleDescriptor, self).__init__(system, location, model_data) + runtime=self.runtime)) def get_context(self): """ diff --git a/common/lib/xmodule/xmodule/modulestore/draft.py b/common/lib/xmodule/xmodule/modulestore/draft.py index c16c7403a9..048aea8867 100644 --- a/common/lib/xmodule/xmodule/modulestore/draft.py +++ b/common/lib/xmodule/xmodule/modulestore/draft.py @@ -4,6 +4,7 @@ from . 
import ModuleStoreBase, Location, namedtuple_to_son
 from .exceptions import ItemNotFoundError
 from .inheritance import own_metadata
 from xmodule.exceptions import InvalidVersionError
+from pytz import UTC

 DRAFT = 'draft'
 # Things w/ these categories should never be marked as version='draft'
@@ -197,7 +198,7 @@ class DraftModuleStore(ModuleStoreBase):
         """
         draft = self.get_item(location)

-        draft.cms.published_date = datetime.utcnow()
+        draft.cms.published_date = datetime.now(UTC)
         draft.cms.published_by = published_by_id
         super(DraftModuleStore, self).update_item(location, draft._model_data._kvs._data)
         super(DraftModuleStore, self).update_children(location, draft._model_data._kvs._children)
diff --git a/common/lib/xmodule/xmodule/modulestore/mongo.py b/common/lib/xmodule/xmodule/modulestore/mongo.py
index be01328733..422abbdd73 100644
--- a/common/lib/xmodule/xmodule/modulestore/mongo.py
+++ b/common/lib/xmodule/xmodule/modulestore/mongo.py
@@ -37,15 +37,23 @@ def get_course_id_no_run(location):
     return "/".join([location.org, location.course])


+class InvalidWriteError(Exception):
+    """
+    Raised to indicate that writing to a particular key
+    in the KeyValueStore is disabled
+    """
+
+
 class MongoKeyValueStore(KeyValueStore):
     """
     A KeyValueStore that maps keyed data access to one of the 3 data areas
     known to the MongoModuleStore (data, children, and metadata)
     """
-    def __init__(self, data, children, metadata):
+    def __init__(self, data, children, metadata, location):
         self._data = data
         self._children = children
         self._metadata = metadata
+        self._location = location

     def get(self, key):
         if key.scope == Scope.children:
@@ -55,7 +63,9 @@ class MongoKeyValueStore(KeyValueStore):
         elif key.scope == Scope.settings:
             return self._metadata[key.field_name]
         elif key.scope == Scope.content:
-            if key.field_name == 'data' and not isinstance(self._data, dict):
+            if key.field_name == 'location':
+                return self._location
+            elif key.field_name == 'data' and not isinstance(self._data, dict): 
return self._data
             else:
                 return self._data[key.field_name]
@@ -68,7 +78,9 @@ class MongoKeyValueStore(KeyValueStore):
         elif key.scope == Scope.settings:
             self._metadata[key.field_name] = value
         elif key.scope == Scope.content:
-            if key.field_name == 'data' and not isinstance(self._data, dict):
+            if key.field_name == 'location':
+                self._location = value
+            elif key.field_name == 'data' and not isinstance(self._data, dict):
                 self._data = value
             else:
                 self._data[key.field_name] = value
@@ -82,7 +94,9 @@ class MongoKeyValueStore(KeyValueStore):
             if key.field_name in self._metadata:
                 del self._metadata[key.field_name]
         elif key.scope == Scope.content:
-            if key.field_name == 'data' and not isinstance(self._data, dict):
+            if key.field_name == 'location':
+                self._location = Location(None)
+            elif key.field_name == 'data' and not isinstance(self._data, dict):
                 self._data = None
             else:
                 del self._data[key.field_name]
@@ -95,7 +109,9 @@ class MongoKeyValueStore(KeyValueStore):
         elif key.scope == Scope.settings:
             return key.field_name in self._metadata
         elif key.scope == Scope.content:
-            if key.field_name == 'data' and not isinstance(self._data, dict):
+            if key.field_name == 'location':
+                return True
+            elif key.field_name == 'data' and not isinstance(self._data, dict):
                 return True
             else:
                 return key.field_name in self._data
@@ -171,10 +187,11 @@
                 definition.get('data', {}),
                 definition.get('children', []),
                 metadata,
+                location,
             )

             model_data = DbModel(kvs, class_, None, MongoUsage(self.course_id, location))
-            module = class_(self, location, model_data)
+            module = class_(self, model_data)
             if self.cached_metadata is not None:
                 # parent container pointers don't differentiate between draft and non-draft
                 # so when we do the lookup, we should do so with a non-draft location
@@ -231,6 +248,7 @@
         self.collection = pymongo.connection.Connection(
             host=host,
             port=port,
+            tz_aware=True,
             **kwargs
         )[db][collection]
diff 
--git a/common/lib/xmodule/xmodule/modulestore/tests/factories.py b/common/lib/xmodule/xmodule/modulestore/tests/factories.py index 8cf148f742..99c5ec2c91 100644 --- a/common/lib/xmodule/xmodule/modulestore/tests/factories.py +++ b/common/lib/xmodule/xmodule/modulestore/tests/factories.py @@ -4,6 +4,11 @@ from uuid import uuid4 from xmodule.modulestore import Location from xmodule.modulestore.django import modulestore from xmodule.modulestore.inheritance import own_metadata +from xmodule.x_module import ModuleSystem +from mitxmako.shortcuts import render_to_string +from xblock.runtime import InvalidScopeError +import datetime +from pytz import UTC class XModuleCourseFactory(Factory): @@ -35,7 +40,7 @@ class XModuleCourseFactory(Factory): if display_name is not None: new_course.display_name = display_name - new_course.lms.start = gmtime() + new_course.lms.start = datetime.datetime.now(UTC) new_course.tabs = kwargs.get( 'tabs', [ @@ -159,3 +164,32 @@ class ItemFactory(XModuleItemFactory): @lazy_attribute_sequence def display_name(attr, n): return "{} {}".format(attr.category.title(), n) + + +def get_test_xmodule_for_descriptor(descriptor): + """ + Attempts to create an xmodule which responds usually correctly from the descriptor. Not guaranteed. 
+ + :param descriptor: + """ + module_sys = ModuleSystem( + ajax_url='', + track_function=None, + get_module=None, + render_template=render_to_string, + replace_urls=None, + xblock_model_data=_test_xblock_model_data_accessor(descriptor) + ) + return descriptor.xmodule(module_sys) + +def _test_xblock_model_data_accessor(descriptor): + simple_map = {} + for field in descriptor.fields: + try: + simple_map[field.name] = getattr(descriptor, field.name) + except InvalidScopeError: + simple_map[field.name] = field.default + for field in descriptor.module_class.fields: + if field.name not in simple_map: + simple_map[field.name] = field.default + return lambda o: simple_map diff --git a/common/lib/xmodule/xmodule/modulestore/tests/test_mongo.py b/common/lib/xmodule/xmodule/modulestore/tests/test_mongo.py index 6332ade04f..07e6124537 100644 --- a/common/lib/xmodule/xmodule/modulestore/tests/test_mongo.py +++ b/common/lib/xmodule/xmodule/modulestore/tests/test_mongo.py @@ -1,11 +1,14 @@ import pymongo from mock import Mock -from nose.tools import assert_equals, assert_raises, assert_not_equals, with_setup, assert_false +from nose.tools import assert_equals, assert_raises, assert_not_equals, assert_false from pprint import pprint +from xblock.core import Scope +from xblock.runtime import KeyValueStore, InvalidScopeError + from xmodule.modulestore import Location -from xmodule.modulestore.mongo import MongoModuleStore +from xmodule.modulestore.mongo import MongoModuleStore, MongoKeyValueStore from xmodule.modulestore.xml_importer import import_from_xml from xmodule.templates import update_templates @@ -19,7 +22,7 @@ DB = 'test' COLLECTION = 'modulestore' FS_ROOT = DATA_DIR # TODO (vshnayder): will need a real fs_root for testing load_item DEFAULT_CLASS = 'xmodule.raw_module.RawDescriptor' -RENDER_TEMPLATE = lambda t_n, d, ctx=None, nsp='main': '' +RENDER_TEMPLATE = lambda t_n, d, ctx = None, nsp = 'main': '' class TestMongoModuleStore(object): @@ -42,7 +45,8 @@ class 
TestMongoModuleStore(object): @staticmethod def initdb(): # connect to the db - store = MongoModuleStore(HOST, DB, COLLECTION, FS_ROOT, RENDER_TEMPLATE, default_class=DEFAULT_CLASS) + store = MongoModuleStore(HOST, DB, COLLECTION, FS_ROOT, RENDER_TEMPLATE, + default_class=DEFAULT_CLASS) # Explicitly list the courses to load (don't want the big one) courses = ['toy', 'simple'] import_from_xml(store, DATA_DIR, courses) @@ -113,3 +117,75 @@ class TestMongoModuleStore(object): course.location.org == 'edx' and course.location.course == 'templates', '{0} is a template course'.format(course) ) + +class TestMongoKeyValueStore(object): + + def setUp(self): + self.data = {'foo': 'foo_value'} + self.location = Location('i4x://org/course/category/name@version') + self.children = ['i4x://org/course/child/a', 'i4x://org/course/child/b'] + self.metadata = {'meta': 'meta_val'} + self.kvs = MongoKeyValueStore(self.data, self.children, self.metadata, self.location) + + def _check_read(self, key, expected_value): + assert_equals(expected_value, self.kvs.get(key)) + assert self.kvs.has(key) + + def test_read(self): + assert_equals(self.data['foo'], self.kvs.get(KeyValueStore.Key(Scope.content, None, None, 'foo'))) + assert_equals(self.location, self.kvs.get(KeyValueStore.Key(Scope.content, None, None, 'location'))) + assert_equals(self.children, self.kvs.get(KeyValueStore.Key(Scope.children, None, None, 'children'))) + assert_equals(self.metadata['meta'], self.kvs.get(KeyValueStore.Key(Scope.settings, None, None, 'meta'))) + assert_equals(None, self.kvs.get(KeyValueStore.Key(Scope.parent, None, None, 'parent'))) + + def test_read_invalid_scope(self): + for scope in (Scope.preferences, Scope.user_info, Scope.user_state): + key = KeyValueStore.Key(scope, None, None, 'foo') + with assert_raises(InvalidScopeError): + self.kvs.get(key) + assert_false(self.kvs.has(key)) + + def test_read_non_dict_data(self): + self.kvs._data = 'xml_data' + assert_equals('xml_data', 
self.kvs.get(KeyValueStore.Key(Scope.content, None, None, 'data'))) + + def _check_write(self, key, value): + self.kvs.set(key, value) + assert_equals(value, self.kvs.get(key)) + + def test_write(self): + yield (self._check_write, KeyValueStore.Key(Scope.content, None, None, 'foo'), 'new_data') + yield (self._check_write, KeyValueStore.Key(Scope.content, None, None, 'location'), Location('i4x://org/course/category/name@new_version')) + yield (self._check_write, KeyValueStore.Key(Scope.children, None, None, 'children'), []) + yield (self._check_write, KeyValueStore.Key(Scope.settings, None, None, 'meta'), 'new_settings') + + def test_write_non_dict_data(self): + self.kvs._data = 'xml_data' + self._check_write(KeyValueStore.Key(Scope.content, None, None, 'data'), 'new_data') + + def test_write_invalid_scope(self): + for scope in (Scope.preferences, Scope.user_info, Scope.user_state, Scope.parent): + with assert_raises(InvalidScopeError): + self.kvs.set(KeyValueStore.Key(scope, None, None, 'foo'), 'new_value') + + def _check_delete_default(self, key, default_value): + self.kvs.delete(key) + assert_equals(default_value, self.kvs.get(key)) + assert self.kvs.has(key) + + def _check_delete_key_error(self, key): + self.kvs.delete(key) + with assert_raises(KeyError): + self.kvs.get(key) + assert_false(self.kvs.has(key)) + + def test_delete(self): + yield (self._check_delete_key_error, KeyValueStore.Key(Scope.content, None, None, 'foo')) + yield (self._check_delete_default, KeyValueStore.Key(Scope.content, None, None, 'location'), Location(None)) + yield (self._check_delete_default, KeyValueStore.Key(Scope.children, None, None, 'children'), []) + yield (self._check_delete_key_error, KeyValueStore.Key(Scope.settings, None, None, 'meta')) + + def test_delete_invalid_scope(self): + for scope in (Scope.preferences, Scope.user_info, Scope.user_state, Scope.parent): + with assert_raises(InvalidScopeError): + self.kvs.delete(KeyValueStore.Key(scope, None, None, 'foo')) diff --git 
a/common/lib/xmodule/xmodule/modulestore/xml.py b/common/lib/xmodule/xmodule/modulestore/xml.py index 4ea83d7e11..a704fc2ae8 100644 --- a/common/lib/xmodule/xmodule/modulestore/xml.py +++ b/common/lib/xmodule/xmodule/modulestore/xml.py @@ -52,7 +52,7 @@ class ImportSystem(XMLParsingSystem, MakoDescriptorSystem): xmlstore: the XMLModuleStore to store the loaded modules in """ - self.unnamed = defaultdict(int) # category -> num of new url_names for that category + self.unnamed = defaultdict(int) # category -> num of new url_names for that category self.used_names = defaultdict(set) # category -> set of used url_names self.org, self.course, self.url_name = course_id.split('/') # cdodge: adding the course_id as passed in for later reference rather than having to recomine the org/course/url_name @@ -124,7 +124,7 @@ class ImportSystem(XMLParsingSystem, MakoDescriptorSystem): else: # TODO (vshnayder): We may want to enable this once course repos are cleaned up. # (or we may want to give up on the requirement for non-state-relevant issues...) - #error_tracker("WARNING: no name specified for module. xml='{0}...'".format(xml[:100])) + # error_tracker("WARNING: no name specified for module. 
xml='{0}...'".format(xml[:100])) pass # Make sure everything is unique @@ -447,7 +447,7 @@ class XMLModuleStore(ModuleStoreBase): def load_extra_content(self, system, course_descriptor, category, base_dir, course_dir, url_name): self._load_extra_content(system, course_descriptor, category, base_dir, course_dir) - # then look in a override folder based on the course run + # then look in a override folder based on the course run if os.path.isdir(base_dir / url_name): self._load_extra_content(system, course_descriptor, category, base_dir / url_name, course_dir) @@ -463,7 +463,7 @@ class XMLModuleStore(ModuleStoreBase): # tabs are referenced in policy.json through a 'slug' which is just the filename without the .html suffix slug = os.path.splitext(os.path.basename(filepath))[0] loc = Location('i4x', course_descriptor.location.org, course_descriptor.location.course, category, slug) - module = HtmlDescriptor(system, loc, {'data': html}) + module = HtmlDescriptor(system, {'data': html, 'location': loc}) # VS[compat]: # Hack because we need to pull in the 'display_name' for static tabs (because we need to edit them) # from the course policy diff --git a/common/lib/xmodule/xmodule/open_ended_grading_classes/combined_open_ended_modulev1.py b/common/lib/xmodule/xmodule/open_ended_grading_classes/combined_open_ended_modulev1.py index e289ba72f1..45e73442d0 100644 --- a/common/lib/xmodule/xmodule/open_ended_grading_classes/combined_open_ended_modulev1.py +++ b/common/lib/xmodule/xmodule/open_ended_grading_classes/combined_open_ended_modulev1.py @@ -117,7 +117,6 @@ class CombinedOpenEndedV1Module(): self.instance_state = instance_state self.display_name = instance_state.get('display_name', "Open Ended") - self.rewrite_content_links = static_data.get('rewrite_content_links', "") #We need to set the location here so the child modules can use it system.set('location', location) @@ -354,17 +353,7 @@ class CombinedOpenEndedV1Module(): Output: Child task HTML """ 
self.update_task_states() - html = self.current_task.get_html(self.system) - return_html = html - try: - #Without try except block, get this error: - # File "/home/vik/mitx_all/mitx/common/lib/xmodule/xmodule/x_module.py", line 263, in rewrite_content_links - # if link.startswith(XASSET_SRCREF_PREFIX): - # Placing try except so that if the error is fixed, this code will start working again. - return_html = rewrite_links(html, self.rewrite_content_links) - except Exception: - pass - return return_html + return self.current_task.get_html(self.system) def get_current_attributes(self, task_number): """ @@ -823,7 +812,6 @@ class CombinedOpenEndedV1Descriptor(): module_class = CombinedOpenEndedV1Module filename_extension = "xml" - stores_state = True has_score = True template_dir_name = "combinedopenended" diff --git a/common/lib/xmodule/xmodule/open_ended_grading_classes/open_ended_module.py b/common/lib/xmodule/xmodule/open_ended_grading_classes/open_ended_module.py index 4f772fe0a1..24af7846d7 100644 --- a/common/lib/xmodule/xmodule/open_ended_grading_classes/open_ended_module.py +++ b/common/lib/xmodule/xmodule/open_ended_grading_classes/open_ended_module.py @@ -731,7 +731,6 @@ class OpenEndedDescriptor(): module_class = OpenEndedModule filename_extension = "xml" - stores_state = True has_score = True template_dir_name = "openended" diff --git a/common/lib/xmodule/xmodule/open_ended_grading_classes/openendedchild.py b/common/lib/xmodule/xmodule/open_ended_grading_classes/openendedchild.py index 7dc8d99451..b5d4e1b676 100644 --- a/common/lib/xmodule/xmodule/open_ended_grading_classes/openendedchild.py +++ b/common/lib/xmodule/xmodule/open_ended_grading_classes/openendedchild.py @@ -16,6 +16,7 @@ from .peer_grading_service import PeerGradingService, MockPeerGradingService import controller_query_service from datetime import datetime +from django.utils.timezone import UTC log = logging.getLogger("mitx.courseware") @@ -56,7 +57,7 @@ class OpenEndedChild(object): 
POST_ASSESSMENT = 'post_assessment' DONE = 'done' - #This is used to tell students where they are at in the module + # This is used to tell students where they are at in the module HUMAN_NAMES = { 'initial': 'Not started', 'assessing': 'In progress', @@ -102,7 +103,7 @@ class OpenEndedChild(object): if system.open_ended_grading_interface: self.peer_gs = PeerGradingService(system.open_ended_grading_interface, system) self.controller_qs = controller_query_service.ControllerQueryService( - system.open_ended_grading_interface,system + system.open_ended_grading_interface, system ) else: self.peer_gs = MockPeerGradingService() @@ -130,7 +131,7 @@ class OpenEndedChild(object): pass def closed(self): - if self.close_date is not None and datetime.utcnow() > self.close_date: + if self.close_date is not None and datetime.now(UTC()) > self.close_date: return True return False @@ -138,13 +139,13 @@ class OpenEndedChild(object): if self.closed(): return True, { 'success': False, - #This is a student_facing_error + # This is a student_facing_error 'error': 'The problem close date has passed, and this problem is now closed.' } elif self.child_attempts > self.max_attempts: return True, { 'success': False, - #This is a student_facing_error + # This is a student_facing_error 'error': 'You have attempted this problem {0} times. You are allowed {1} attempts.'.format( self.child_attempts, self.max_attempts ) @@ -272,7 +273,7 @@ class OpenEndedChild(object): try: return Progress(int(self.get_score()['score']), int(self._max_score)) except Exception as err: - #This is a dev_facing_error + # This is a dev_facing_error log.exception("Got bad progress from open ended child module. Max Score: {0}".format(self._max_score)) return None return None @@ -281,10 +282,10 @@ class OpenEndedChild(object): """ return dict out-of-sync error message, and also log. """ - #This is a dev_facing_error + # This is a dev_facing_error log.warning("Open ended child state out sync. state: %r, get: %r. 
%s", self.child_state, get, msg) - #This is a student_facing_error + # This is a student_facing_error return {'success': False, 'error': 'The problem state got out-of-sync. Please try reloading the page.'} @@ -391,7 +392,7 @@ class OpenEndedChild(object): """ overall_success = False if not self.accept_file_upload: - #If the question does not accept file uploads, do not do anything + # If the question does not accept file uploads, do not do anything return True, get_data has_file_to_upload, uploaded_to_s3, image_ok, image_tag = self.check_for_image_and_upload(get_data) @@ -399,19 +400,19 @@ class OpenEndedChild(object): get_data['student_answer'] += image_tag overall_success = True elif has_file_to_upload and not uploaded_to_s3 and image_ok: - #In this case, an image was submitted by the student, but the image could not be uploaded to S3. Likely - #a config issue (development vs deployment). For now, just treat this as a "success" + # In this case, an image was submitted by the student, but the image could not be uploaded to S3. Likely + # a config issue (development vs deployment). For now, just treat this as a "success" log.exception("Student AJAX post to combined open ended xmodule indicated that it contained an image, " "but the image was not able to be uploaded to S3. 
This could indicate a config" "issue with this deployment, but it could also indicate a problem with S3 or with the" "student image itself.") overall_success = True elif not has_file_to_upload: - #If there is no file to upload, probably the student has embedded the link in the answer text + # If there is no file to upload, probably the student has embedded the link in the answer text success, get_data['student_answer'] = self.check_for_url_in_text(get_data['student_answer']) overall_success = success - #log.debug("Has file: {0} Uploaded: {1} Image Ok: {2}".format(has_file_to_upload, uploaded_to_s3, image_ok)) + # log.debug("Has file: {0} Uploaded: {1} Image Ok: {2}".format(has_file_to_upload, uploaded_to_s3, image_ok)) return overall_success, get_data @@ -441,7 +442,7 @@ class OpenEndedChild(object): success = False allowed_to_submit = True response = {} - #This is a student_facing_error + # This is a student_facing_error error_string = ("You need to peer grade {0} more in order to make another submission. " "You have graded {1}, and {2} are required. You have made {3} successful peer grading submissions.") try: @@ -451,17 +452,17 @@ class OpenEndedChild(object): student_sub_count = response['student_sub_count'] success = True except: - #This is a dev_facing_error + # This is a dev_facing_error log.error("Could not contact external open ended graders for location {0} and student {1}".format( self.location_string, student_id)) - #This is a student_facing_error + # This is a student_facing_error error_message = "Could not contact the graders. Please notify course staff." 
return success, allowed_to_submit, error_message if count_graded >= count_required: return success, allowed_to_submit, "" else: allowed_to_submit = False - #This is a student_facing_error + # This is a student_facing_error error_message = error_string.format(count_required - count_graded, count_graded, count_required, student_sub_count) return success, allowed_to_submit, error_message diff --git a/common/lib/xmodule/xmodule/open_ended_grading_classes/self_assessment_module.py b/common/lib/xmodule/xmodule/open_ended_grading_classes/self_assessment_module.py index 497d2f6eed..5c46fbf095 100644 --- a/common/lib/xmodule/xmodule/open_ended_grading_classes/self_assessment_module.py +++ b/common/lib/xmodule/xmodule/open_ended_grading_classes/self_assessment_module.py @@ -286,7 +286,6 @@ class SelfAssessmentDescriptor(): module_class = SelfAssessmentModule filename_extension = "xml" - stores_state = True has_score = True template_dir_name = "selfassessment" diff --git a/common/lib/xmodule/xmodule/peer_grading_module.py b/common/lib/xmodule/xmodule/peer_grading_module.py index ccc3e31f51..a13fef8e40 100644 --- a/common/lib/xmodule/xmodule/peer_grading_module.py +++ b/common/lib/xmodule/xmodule/peer_grading_module.py @@ -10,17 +10,17 @@ from .x_module import XModule from xmodule.raw_module import RawDescriptor from xmodule.modulestore.django import modulestore from .timeinfo import TimeInfo -from xblock.core import Object, String, Scope -from xmodule.fields import Date, StringyFloat, StringyInteger, StringyBoolean +from xblock.core import Dict, String, Scope, Boolean, Integer, Float +from xmodule.fields import Date from xmodule.open_ended_grading_classes.peer_grading_service import PeerGradingService, GradingServiceError, MockPeerGradingService from open_ended_grading_classes import combined_open_ended_rubric +from django.utils.timezone import UTC log = logging.getLogger(__name__) USE_FOR_SINGLE_LOCATION = False LINK_TO_LOCATION = "" -TRUE_DICT = [True, "True", "true", 
"TRUE"] MAX_SCORE = 1 IS_GRADED = False @@ -28,7 +28,7 @@ EXTERNAL_GRADER_NO_CONTACT_ERROR = "Failed to contact external graders. Please class PeerGradingFields(object): - use_for_single_location = StringyBoolean( + use_for_single_location = Boolean( display_name="Show Single Problem", help='When True, only the single problem specified by "Link to Problem Location" is shown. ' 'When False, a panel is displayed with all problems available for peer grading.', @@ -39,22 +39,22 @@ class PeerGradingFields(object): help='The location of the problem being graded. Only used when "Show Single Problem" is True.', default=LINK_TO_LOCATION, scope=Scope.settings ) - is_graded = StringyBoolean( + is_graded = Boolean( display_name="Graded", help='Defines whether the student gets credit for grading this problem. Only used when "Show Single Problem" is True.', default=IS_GRADED, scope=Scope.settings ) due_date = Date(help="Due date that should be displayed.", default=None, scope=Scope.settings) grace_period_string = String(help="Amount of grace to give on the due date.", default=None, scope=Scope.settings) - max_grade = StringyInteger( + max_grade = Integer( help="The maximum grade that a student can receive for this problem.", default=MAX_SCORE, scope=Scope.settings, values={"min": 0} ) - student_data_for_location = Object( + student_data_for_location = Dict( help="Student data for a given peer grading problem.", scope=Scope.user_state ) - weight = StringyFloat( + weight = Float( display_name="Problem Weight", help="Defines the number of points each problem is worth. 
If the value is not set, each problem is worth one point.", scope=Scope.settings, values={"min": 0, "step": ".1"} @@ -62,6 +62,9 @@ class PeerGradingFields(object): class PeerGradingModule(PeerGradingFields, XModule): + """ + PeerGradingModule.__init__ takes the same arguments as xmodule.x_module:XModule.__init__ + """ _VERSION = 1 js = {'coffee': [resource_string(__name__, 'js/src/peergrading/peer_grading.coffee'), @@ -73,18 +76,17 @@ class PeerGradingModule(PeerGradingFields, XModule): css = {'scss': [resource_string(__name__, 'css/combinedopenended/display.scss')]} - def __init__(self, system, location, descriptor, model_data): - XModule.__init__(self, system, location, descriptor, model_data) + def __init__(self, *args, **kwargs): + super(PeerGradingModule, self).__init__(*args, **kwargs) #We need to set the location here so the child modules can use it - system.set('location', location) - self.system = system + self.runtime.set('location', self.location) if (self.system.open_ended_grading_interface): self.peer_gs = PeerGradingService(self.system.open_ended_grading_interface, self.system) else: self.peer_gs = MockPeerGradingService() - if self.use_for_single_location in TRUE_DICT: + if self.use_for_single_location: try: self.linked_problem = modulestore().get_instance(self.system.course_id, self.link_to_location) except: @@ -112,7 +114,7 @@ class PeerGradingModule(PeerGradingFields, XModule): if not self.ajax_url.endswith("/"): self.ajax_url = self.ajax_url + "/" - #StringyInteger could return None, so keep this check. + # Integer could return None, so keep this check. 
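The `Stringy*` fields above are replaced by the typed xblock fields (`Boolean`, `Integer`, `Float`) which, per the changelog, hold either the typed value or a string convertible to it — and may still yield `None` when unset, which is why the `isinstance(self.max_grade, int)` check is kept. A minimal sketch of that coerce-on-read behavior (illustrative names only, not the actual xblock implementation):

```python
# Illustrative sketch, not the real xblock Integer class: a typed field that
# accepts either the typed value (3) or a convertible string ('3'), and
# falls back to its default (possibly None) when no value is stored.
class CoercingInteger:
    def __init__(self, default=None):
        self.default = default

    def from_json(self, value):
        if value is None:
            return self.default   # may legitimately be None, hence the isinstance check
        if isinstance(value, int):
            return value          # already typed: 3
        return int(value)         # string form from XML/JSON: '3'

max_grade = CoercingInteger(default=1)
assert max_grade.from_json(3) == 3
assert max_grade.from_json('3') == 3
assert max_grade.from_json(None) == 1
```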
if not isinstance(self.max_grade, int): raise TypeError("max_grade needs to be an integer.") @@ -120,7 +122,7 @@ class PeerGradingModule(PeerGradingFields, XModule): return self._closed(self.timeinfo) def _closed(self, timeinfo): - if timeinfo.close_date is not None and datetime.utcnow() > timeinfo.close_date: + if timeinfo.close_date is not None and datetime.now(UTC()) > timeinfo.close_date: return True return False @@ -146,7 +148,7 @@ class PeerGradingModule(PeerGradingFields, XModule): """ if self.closed(): return self.peer_grading_closed() - if self.use_for_single_location not in TRUE_DICT: + if not self.use_for_single_location: return self.peer_grading() else: return self.peer_grading_problem({'location': self.link_to_location})['html'] @@ -166,9 +168,9 @@ class PeerGradingModule(PeerGradingFields, XModule): } if dispatch not in handlers: - #This is a dev_facing_error + # This is a dev_facing_error log.error("Cannot find {0} in handlers in handle_ajax function for open_ended_module.py".format(dispatch)) - #This is a dev_facing_error + # This is a dev_facing_error return json.dumps({'error': 'Error handling action. 
Please try again.', 'success': False}) d = handlers[dispatch](get) @@ -187,7 +189,7 @@ class PeerGradingModule(PeerGradingFields, XModule): count_required = response['count_required'] success = True except GradingServiceError: - #This is a dev_facing_error + # This is a dev_facing_error log.exception("Error getting location data from controller for location {0}, student {1}" .format(location, student_id)) @@ -203,7 +205,7 @@ class PeerGradingModule(PeerGradingFields, XModule): 'score': score, 'total': max_score, } - if self.use_for_single_location not in TRUE_DICT or self.is_graded not in TRUE_DICT: + if not self.use_for_single_location or not self.is_graded: return score_dict try: @@ -220,7 +222,7 @@ class PeerGradingModule(PeerGradingFields, XModule): count_graded = response['count_graded'] count_required = response['count_required'] if count_required > 0 and count_graded >= count_required: - #Ensures that once a student receives a final score for peer grading, that it does not change. + # Ensures that once a student receives a final score for peer grading, that it does not change. self.student_data_for_location = response if self.weight is not None: @@ -238,7 +240,7 @@ class PeerGradingModule(PeerGradingFields, XModule): randomization, and 5/7 on another ''' max_grade = None - if self.use_for_single_location in TRUE_DICT and self.is_graded in TRUE_DICT: + if self.use_for_single_location and self.is_graded: max_grade = self.max_grade return max_grade @@ -271,10 +273,10 @@ class PeerGradingModule(PeerGradingFields, XModule): response = self.peer_gs.get_next_submission(location, grader_id) return response except GradingServiceError: - #This is a dev_facing_error + # This is a dev_facing_error log.exception("Error getting next submission. 
                          server url: {0}  location: {1}, grader_id: {2}"
                          .format(self.peer_gs.url, location, grader_id))
-            #This is a student_facing_error
+            # This is a student_facing_error
             return {'success': False, 'error': EXTERNAL_GRADER_NO_CONTACT_ERROR}
@@ -314,13 +316,13 @@ class PeerGradingModule(PeerGradingFields, XModule):
                                                          score, feedback, submission_key, rubric_scores, submission_flagged)
             return response
         except GradingServiceError:
-            #This is a dev_facing_error
+            # This is a dev_facing_error
             log.exception("""Error saving grade to open ended grading service.  server url: {0}, location: {1}, submission_id:{2},
                           submission_key: {3}, score: {4}"""
                           .format(self.peer_gs.url, location, submission_id, submission_key, score)
                           )
-            #This is a student_facing_error
+            # This is a student_facing_error
             return {
                 'success': False,
                 'error': EXTERNAL_GRADER_NO_CONTACT_ERROR
@@ -356,10 +358,10 @@ class PeerGradingModule(PeerGradingFields, XModule):
             response = self.peer_gs.is_student_calibrated(location, grader_id)
             return response
         except GradingServiceError:
-            #This is a dev_facing_error
+            # This is a dev_facing_error
             log.exception("Error from open ended grading service.  server url: {0}, grader_id: {0}, location: {1}"
                           .format(self.peer_gs.url, grader_id, location))
-            #This is a student_facing_error
+            # This is a student_facing_error
             return {
                 'success': False,
                 'error': EXTERNAL_GRADER_NO_CONTACT_ERROR
@@ -401,17 +403,17 @@ class PeerGradingModule(PeerGradingFields, XModule):
             response = self.peer_gs.show_calibration_essay(location, grader_id)
             return response
         except GradingServiceError:
-            #This is a dev_facing_error
+            # This is a dev_facing_error
             log.exception("Error from open ended grading service.  server url: {0}, location: {0}"
                           .format(self.peer_gs.url, location))
-            #This is a student_facing_error
+            # This is a student_facing_error
             return {'success': False, 'error': EXTERNAL_GRADER_NO_CONTACT_ERROR}
         # if we can't parse the rubric into HTML,
         except etree.XMLSyntaxError:
-            #This is a dev_facing_error
+            # This is a dev_facing_error
             log.exception("Cannot parse rubric string.")
-            #This is a student_facing_error
+            # This is a student_facing_error
             return {'success': False, 'error': 'Error displaying submission. Please notify course staff.'}
@@ -455,11 +457,11 @@ class PeerGradingModule(PeerGradingFields, XModule):
             response['actual_rubric'] = rubric_renderer.render_rubric(response['actual_rubric'])['html']
             return response
         except GradingServiceError:
-            #This is a dev_facing_error
+            # This is a dev_facing_error
             log.exception(
                 "Error saving calibration grade, location: {0}, submission_key: {1}, grader_id: {2}".format(
                     location, submission_key, grader_id))
-            #This is a student_facing_error
+            # This is a student_facing_error
             return self._err_response('There was an error saving your score. Please notify course staff.')

     def peer_grading_closed(self):
@@ -491,13 +493,13 @@ class PeerGradingModule(PeerGradingFields, XModule):
             problem_list = problem_list_dict['problem_list']

         except GradingServiceError:
-            #This is a student_facing_error
+            # This is a student_facing_error
             error_text = EXTERNAL_GRADER_NO_CONTACT_ERROR
             log.error(error_text)
             success = False
         # catch error if if the json loads fails
         except ValueError:
-            #This is a student_facing_error
+            # This is a student_facing_error
             error_text = "Could not get list of problems to peer grade. Please notify course staff."
             log.error(error_text)
             success = False
@@ -556,9 +558,9 @@ class PeerGradingModule(PeerGradingFields, XModule):
        Show individual problem interface
        '''
        if get is None or get.get('location') is None:
-            if self.use_for_single_location not in TRUE_DICT:
-                #This is an error case, because it must be set to use a single location to be called without get parameters
-                #This is a dev_facing_error
+            if not self.use_for_single_location:
+                # This is an error case, because it must be set to use a single location to be called without get parameters
+                # This is a dev_facing_error
                 log.error(
                     "Peer grading problem in peer_grading_module called with no get parameters, but use_for_single_location is False.")
                 return {'html': "", 'success': False}
@@ -602,7 +604,6 @@ class PeerGradingDescriptor(PeerGradingFields, RawDescriptor):
     module_class = PeerGradingModule
     filename_extension = "xml"
-    stores_state = True
     has_score = True
     always_recalculate_grades = True
     template_dir_name = "peer_grading"
diff --git a/common/lib/xmodule/xmodule/poll_module.py b/common/lib/xmodule/xmodule/poll_module.py
index dafcef9835..9f2359865a 100644
--- a/common/lib/xmodule/xmodule/poll_module.py
+++ b/common/lib/xmodule/xmodule/poll_module.py
@@ -19,7 +19,7 @@ from xmodule.x_module import XModule
 from xmodule.stringify import stringify_children
 from xmodule.mako_module import MakoModuleDescriptor
 from xmodule.xml_module import XmlDescriptor
-from xblock.core import Scope, String, Object, Boolean, List
+from xblock.core import Scope, String, Dict, Boolean, List

 log = logging.getLogger(__name__)

@@ -30,7 +30,7 @@ class PollFields(object):
     voted = Boolean(help="Whether this student has voted on the poll", scope=Scope.user_state, default=False)
     poll_answer = String(help="Student answer", scope=Scope.user_state, default='')
-    poll_answers = Object(help="All possible answers for the poll fro other students", scope=Scope.content)
+    poll_answers = Dict(help="All possible answers for the poll fro other students", scope=Scope.content)
     answers = List(help="Poll answers from xml", scope=Scope.content, default=[])
     question = String(help="Poll question", scope=Scope.content, default='')
@@ -141,7 +141,6 @@ class PollDescriptor(PollFields, MakoModuleDescriptor, XmlDescriptor):
     module_class = PollModule
     template_dir_name = 'poll'
-    stores_state = True

     @classmethod
     def definition_from_xml(cls, xml_object, system):
diff --git a/common/lib/xmodule/xmodule/randomize_module.py b/common/lib/xmodule/xmodule/randomize_module.py
index 434706530b..04b9a6e215 100644
--- a/common/lib/xmodule/xmodule/randomize_module.py
+++ b/common/lib/xmodule/xmodule/randomize_module.py
@@ -94,7 +94,6 @@ class RandomizeDescriptor(RandomizeFields, SequenceDescriptor):

     filename_extension = "xml"

-    stores_state = True

     def definition_to_xml(self, resource_fs):
diff --git a/common/lib/xmodule/xmodule/seq_module.py b/common/lib/xmodule/xmodule/seq_module.py
index f6c3133ede..f0dfca3be6 100644
--- a/common/lib/xmodule/xmodule/seq_module.py
+++ b/common/lib/xmodule/xmodule/seq_module.py
@@ -121,8 +121,6 @@ class SequenceDescriptor(SequenceFields, MakoModuleDescriptor, XmlDescriptor):
     mako_template = 'widgets/sequence-edit.html'
     module_class = SequenceModule

-    stores_state = True  # For remembering where in the sequence the student is
-
     js = {'coffee': [resource_string(__name__, 'js/src/sequence/edit.coffee')]}
     js_module_name = "SequenceDescriptor"
diff --git a/common/lib/xmodule/xmodule/static_content.py b/common/lib/xmodule/xmodule/static_content.py
index 8929ddebd4..4c4827e0aa 100755
--- a/common/lib/xmodule/xmodule/static_content.py
+++ b/common/lib/xmodule/xmodule/static_content.py
@@ -129,7 +129,7 @@ def _write_js(output_root, classes):
 def _write_files(output_root, contents):
     _ensure_dir(output_root)
     for extra_file in set(output_root.files()) - set(contents.keys()):
-        extra_file.remove()
+        extra_file.remove_p()

     for filename, file_content in contents.iteritems():
         (output_root / filename).write_bytes(file_content)
diff --git a/common/lib/xmodule/xmodule/templates/videoalpha/default.yaml b/common/lib/xmodule/xmodule/templates/videoalpha/default.yaml
index dba8bbd0b4..1c25b272a3 100644
--- a/common/lib/xmodule/xmodule/templates/videoalpha/default.yaml
+++ b/common/lib/xmodule/xmodule/templates/videoalpha/default.yaml
@@ -1,6 +1,6 @@
 ---
 metadata:
-    display_name: Video Alpha 1
+    display_name: Video Alpha
     version: 1
 data: |
diff --git a/common/lib/xmodule/xmodule/tests/test_annotatable_module.py b/common/lib/xmodule/xmodule/tests/test_annotatable_module.py
index 43eae8e43e..268f743421 100644
--- a/common/lib/xmodule/xmodule/tests/test_annotatable_module.py
+++ b/common/lib/xmodule/xmodule/tests/test_annotatable_module.py
@@ -29,10 +29,10 @@ class AnnotatableModuleTestCase(unittest.TestCase):
     '''
     descriptor = Mock()
-    module_data = {'data': sample_xml}
+    module_data = {'data': sample_xml, 'location': location}

     def setUp(self):
-        self.annotatable = AnnotatableModule(test_system(), self.location, self.descriptor, self.module_data)
+        self.annotatable = AnnotatableModule(test_system(), self.descriptor, self.module_data)

     def test_annotation_data_attr(self):
         el = etree.fromstring('test')
diff --git a/common/lib/xmodule/xmodule/tests/test_capa_module.py b/common/lib/xmodule/xmodule/tests/test_capa_module.py
index 61de21b129..deb6f13e20 100644
--- a/common/lib/xmodule/xmodule/tests/test_capa_module.py
+++ b/common/lib/xmodule/xmodule/tests/test_capa_module.py
@@ -18,6 +18,8 @@ from xmodule.modulestore import Location
 from django.http import QueryDict

 from . import test_system
+from pytz import UTC
+from capa.correctmap import CorrectMap


 class CapaFactory(object):
@@ -85,7 +87,7 @@ class CapaFactory(object):
         """
         location = Location(["i4x", "edX", "capa_test", "problem",
                              "SampleProblem{0}".format(CapaFactory.next_num())])
-        model_data = {'data': CapaFactory.sample_problem_xml}
+        model_data = {'data': CapaFactory.sample_problem_xml, 'location': location}

         if graceperiod is not None:
             model_data['graceperiod'] = graceperiod
@@ -112,7 +114,7 @@ class CapaFactory(object):
         system = test_system()
         system.render_template = Mock(return_value="Test Template HTML")
-        module = CapaModule(system, location, descriptor, model_data)
+        module = CapaModule(system, descriptor, model_data)

         if correct:
             # TODO: probably better to actually set the internal state properly, but...
@@ -126,7 +128,7 @@ class CapaFactory(object):
 class CapaModuleTest(unittest.TestCase):

     def setUp(self):
-        now = datetime.datetime.now()
+        now = datetime.datetime.now(UTC)
         day_delta = datetime.timedelta(days=1)
         self.yesterday_str = str(now - day_delta)
         self.today_str = str(now)
@@ -475,12 +477,12 @@ class CapaModuleTest(unittest.TestCase):

         # Simulate that the problem is queued
         with patch('capa.capa_problem.LoncapaProblem.is_queued') \
-                as mock_is_queued,\
+                as mock_is_queued, \
             patch('capa.capa_problem.LoncapaProblem.get_recentmost_queuetime') \
                 as mock_get_queuetime:
             mock_is_queued.return_value = True
-            mock_get_queuetime.return_value = datetime.datetime.now()
+            mock_get_queuetime.return_value = datetime.datetime.now(UTC)

             get_request_dict = {CapaFactory.input_key(): '3.14'}
             result = module.check_problem(get_request_dict)
@@ -596,6 +598,85 @@ class CapaModuleTest(unittest.TestCase):
         # Expect that the problem was NOT reset
         self.assertTrue('success' in result and not result['success'])

+    def test_rescore_problem_correct(self):
+
+        module = CapaFactory.create(attempts=1, done=True)
+
+        # Simulate that all answers are marked correct, no matter
+        # what the input is, by patching LoncapaResponse.evaluate_answers()
+        with patch('capa.responsetypes.LoncapaResponse.evaluate_answers') as mock_evaluate_answers:
+            mock_evaluate_answers.return_value = CorrectMap(CapaFactory.answer_key(), 'correct')
+            result = module.rescore_problem()
+
+        # Expect that the problem is marked correct
+        self.assertEqual(result['success'], 'correct')
+
+        # Expect that we get no HTML
+        self.assertFalse('contents' in result)
+
+        # Expect that the number of attempts is not incremented
+        self.assertEqual(module.attempts, 1)
+
+    def test_rescore_problem_incorrect(self):
+        # make sure it also works when attempts have been reset,
+        # so add this to the test:
+        module = CapaFactory.create(attempts=0, done=True)
+
+        # Simulate that all answers are marked incorrect, no matter
+        # what the input is, by patching LoncapaResponse.evaluate_answers()
+        with patch('capa.responsetypes.LoncapaResponse.evaluate_answers') as mock_evaluate_answers:
+            mock_evaluate_answers.return_value = CorrectMap(CapaFactory.answer_key(), 'incorrect')
+            result = module.rescore_problem()
+
+        # Expect that the problem is marked incorrect
+        self.assertEqual(result['success'], 'incorrect')
+
+        # Expect that the number of attempts is not incremented
+        self.assertEqual(module.attempts, 0)
+
+    def test_rescore_problem_not_done(self):
+        # Simulate that the problem is NOT done
+        module = CapaFactory.create(done=False)
+
+        # Try to rescore the problem, and get exception
+        with self.assertRaises(xmodule.exceptions.NotFoundError):
+            module.rescore_problem()
+
+    def test_rescore_problem_not_supported(self):
+        module = CapaFactory.create(done=True)
+
+        # Try to rescore the problem, and get exception
+        with patch('capa.capa_problem.LoncapaProblem.supports_rescoring') as mock_supports_rescoring:
+            mock_supports_rescoring.return_value = False
+            with self.assertRaises(NotImplementedError):
+                module.rescore_problem()
+
+    def _rescore_problem_error_helper(self, exception_class):
+        """Helper to allow testing all errors that rescoring might return."""
+        # Create the module
+        module = CapaFactory.create(attempts=1, done=True)
+
+        # Simulate answering a problem that raises the exception
+        with patch('capa.capa_problem.LoncapaProblem.rescore_existing_answers') as mock_rescore:
+            mock_rescore.side_effect = exception_class(u'test error \u03a9')
+            result = module.rescore_problem()
+
+        # Expect an AJAX alert message in 'success'
+        expected_msg = u'Error: test error \u03a9'
+        self.assertEqual(result['success'], expected_msg)
+
+        # Expect that the number of attempts is NOT incremented
+        self.assertEqual(module.attempts, 1)
+
+    def test_rescore_problem_student_input_error(self):
+        self._rescore_problem_error_helper(StudentInputError)
+
+    def test_rescore_problem_problem_error(self):
+        self._rescore_problem_error_helper(LoncapaProblemError)
+
+    def test_rescore_problem_response_error(self):
+        self._rescore_problem_error_helper(ResponseError)
+
     def test_save_problem(self):
         module = CapaFactory.create(done=False)
diff --git a/common/lib/xmodule/xmodule/tests/test_combined_open_ended.py b/common/lib/xmodule/xmodule/tests/test_combined_open_ended.py
index 409347882f..7dbaf6fe3d 100644
--- a/common/lib/xmodule/xmodule/tests/test_combined_open_ended.py
+++ b/common/lib/xmodule/xmodule/tests/test_combined_open_ended.py
@@ -175,7 +175,6 @@ class OpenEndedModuleTest(unittest.TestCase):
         'max_score': max_score,
         'display_name': 'Name',
         'accept_file_upload': False,
-        'rewrite_content_links': "",
         'close_date': None,
         's3_interface': test_util_open_ended.S3_INTERFACE,
         'open_ended_grading_interface': test_util_open_ended.OPEN_ENDED_GRADING_INTERFACE,
@@ -332,7 +331,6 @@ class CombinedOpenEndedModuleTest(unittest.TestCase):
         'max_score': max_score,
         'display_name': 'Name',
         'accept_file_upload': False,
-        'rewrite_content_links': "",
         'close_date': "",
         's3_interface': test_util_open_ended.S3_INTERFACE,
         'open_ended_grading_interface': test_util_open_ended.OPEN_ENDED_GRADING_INTERFACE,
@@ -370,10 +368,15 @@ class CombinedOpenEndedModuleTest(unittest.TestCase):
     full_definition = definition_template.format(prompt=prompt, rubric=rubric, task1=task_xml1, task2=task_xml2)
     descriptor = Mock(data=full_definition)
     test_system = test_system()
-    combinedoe_container = CombinedOpenEndedModule(test_system,
-                                                   location,
-                                                   descriptor,
-                                                   model_data={'data': full_definition, 'weight': '1'})
+    combinedoe_container = CombinedOpenEndedModule(
+        test_system,
+        descriptor,
+        model_data={
+            'data': full_definition,
+            'weight': '1',
+            'location': location
+        }
+    )

     def setUp(self):
         # TODO: this constructor call is definitely wrong, but neither branch
diff --git a/common/lib/xmodule/xmodule/tests/test_conditional.py b/common/lib/xmodule/xmodule/tests/test_conditional.py
index 320b94efb7..fed40b690f 100644
--- a/common/lib/xmodule/xmodule/tests/test_conditional.py
+++ b/common/lib/xmodule/xmodule/tests/test_conditional.py
@@ -20,7 +20,7 @@ from . import test_system

 class DummySystem(ImportSystem):

-    @patch('xmodule.modulestore.xml.OSFS', lambda dir: MemoryFS())
+    @patch('xmodule.modulestore.xml.OSFS', lambda directory: MemoryFS())
     def __init__(self, load_error_modules):

         xmlstore = XMLModuleStore("data_dir", course_dirs=[], load_error_modules=load_error_modules)
@@ -41,7 +41,8 @@ class DummySystem(ImportSystem):
         )

     def render_template(self, template, context):
-            raise Exception("Shouldn't be called")
+        raise Exception("Shouldn't be called")
+

 class ConditionalFactory(object):
     """
@@ -60,9 +61,9 @@ class ConditionalFactory(object):
         source_location = Location(["i4x", "edX", "conditional_test", "problem", "SampleProblem"])
         if source_is_error_module:
             # Make an error descriptor and module
-            source_descriptor = NonStaffErrorDescriptor.from_xml('some random xml data',
+            source_descriptor = NonStaffErrorDescriptor.from_xml('some random xml data',
                                                                  system,
-                                                                 org=source_location.org,
+                                                                 org=source_location.org,
                                                                  course=source_location.course,
                                                                  error_msg='random error message')
             source_module = source_descriptor.xmodule(system)
@@ -87,13 +88,13 @@ class ConditionalFactory(object):

         # construct conditional module:
         cond_location = Location(["i4x", "edX", "conditional_test", "conditional", "SampleConditional"])
-        model_data = {'data': ''}
-        cond_module = ConditionalModule(system, cond_location, cond_descriptor, model_data)
+        model_data = {'data': '', 'location': cond_location}
+        cond_module = ConditionalModule(system, cond_descriptor, model_data)

         # return dict:
         return {'cond_module': cond_module,
                 'source_module': source_module,
-                'child_module': child_module }
+                'child_module': child_module}


 class ConditionalModuleBasicTest(unittest.TestCase):
@@ -106,15 +107,14 @@ class ConditionalModuleBasicTest(unittest.TestCase):
         self.test_system = test_system()

     def test_icon_class(self):
-        '''verify that get_icon_class works independent of condition satisfaction'''
+        '''verify that get_icon_class works independent of condition satisfaction'''
         modules = ConditionalFactory.create(self.test_system)
         for attempted in ["false", "true"]:
-            for icon_class in [ 'other', 'problem', 'video']:
+            for icon_class in ['other', 'problem', 'video']:
                 modules['source_module'].is_attempted = attempted
                 modules['child_module'].get_icon_class = lambda: icon_class
                 self.assertEqual(modules['cond_module'].get_icon_class(), icon_class)

-
     def test_get_html(self):
         modules = ConditionalFactory.create(self.test_system)
         # because test_system returns the repr of the context dict passed to render_template,
@@ -186,7 +186,6 @@ class ConditionalModuleXmlTest(unittest.TestCase):
         if isinstance(descriptor, Location):
             location = descriptor
             descriptor = self.modulestore.get_instance(course.id, location, depth=None)
-            location = descriptor.location

         return descriptor.xmodule(self.test_system)

     # edx - HarvardX
@@ -225,4 +224,3 @@ class ConditionalModuleXmlTest(unittest.TestCase):
         print "post-attempt ajax: ", ajax
         html = ajax['html']
         self.assertTrue(any(['This is a secret' in item for item in html]))
-
diff --git a/common/lib/xmodule/xmodule/tests/test_course_module.py b/common/lib/xmodule/xmodule/tests/test_course_module.py
index 0d789964e9..53181b5a28 100644
--- a/common/lib/xmodule/xmodule/tests/test_course_module.py
+++ b/common/lib/xmodule/xmodule/tests/test_course_module.py
@@ -1,5 +1,4 @@
 import unittest
-from time import strptime
 import datetime

 from fs.memoryfs import MemoryFS
@@ -8,13 +7,13 @@ from mock import Mock, patch

 from xmodule.modulestore.xml import ImportSystem, XMLModuleStore
 import xmodule.course_module
-from xmodule.util.date_utils import time_to_datetime
+from django.utils.timezone import UTC


 ORG = 'test_org'
 COURSE = 'test_course'

-NOW = strptime('2013-01-01T01:00:00', '%Y-%m-%dT%H:%M:00')
+NOW = datetime.datetime.strptime('2013-01-01T01:00:00', '%Y-%m-%dT%H:%M:00').replace(tzinfo=UTC())


 class DummySystem(ImportSystem):
@@ -81,10 +80,10 @@ class IsNewCourseTestCase(unittest.TestCase):
             Mock(wraps=datetime.datetime)
         )
         mocked_datetime = datetime_patcher.start()
-        mocked_datetime.utcnow.return_value = time_to_datetime(NOW)
+        mocked_datetime.now.return_value = NOW
         self.addCleanup(datetime_patcher.stop)

-    @patch('xmodule.course_module.time.gmtime')
+    @patch('xmodule.course_module.datetime.now')
     def test_sorting_score(self, gmtime_mock):
         gmtime_mock.return_value = NOW
@@ -125,7 +124,7 @@ class IsNewCourseTestCase(unittest.TestCase):
             print "Comparing %s to %s" % (a, b)
             assertion(a_score, b_score)

-    @patch('xmodule.course_module.time.gmtime')
+    @patch('xmodule.course_module.datetime.now')
     def test_start_date_text(self, gmtime_mock):
         gmtime_mock.return_value = NOW
diff --git a/common/lib/xmodule/xmodule/tests/test_date_utils.py b/common/lib/xmodule/xmodule/tests/test_date_utils.py
index af96de018f..d051a7c431 100644
--- a/common/lib/xmodule/xmodule/tests/test_date_utils.py
+++ b/common/lib/xmodule/xmodule/tests/test_date_utils.py
@@ -3,19 +3,12 @@ from nose.tools import assert_equals
 from xmodule.util import date_utils
 import datetime
-import time
-
-
-def test_get_time_struct_display():
-    assert_equals("", date_utils.get_time_struct_display(None, ""))
-    test_time = time.struct_time((1992, 3, 12, 15, 3, 30, 1, 71, 0))
-    assert_equals("03/12/1992", date_utils.get_time_struct_display(test_time, '%m/%d/%Y'))
-    assert_equals("15:03", date_utils.get_time_struct_display(test_time, '%H:%M'))
+from pytz import UTC


 def test_get_default_time_display():
     assert_equals("", date_utils.get_default_time_display(None))
-    test_time = time.struct_time((1992, 3, 12, 15, 3, 30, 1, 71, 0))
+    test_time = datetime.datetime(1992, 3, 12, 15, 3, 30, tzinfo=UTC)
     assert_equals(
         "Mar 12, 1992 at 15:03 UTC",
         date_utils.get_default_time_display(test_time))
@@ -26,10 +19,36 @@ def test_get_default_time_display():
         "Mar 12, 1992 at 15:03",
         date_utils.get_default_time_display(test_time, False))

-
-def test_time_to_datetime():
-    assert_equals(None, date_utils.time_to_datetime(None))
-    test_time = time.struct_time((1992, 3, 12, 15, 3, 30, 1, 71, 0))
+def test_get_default_time_display_notz():
+    test_time = datetime.datetime(1992, 3, 12, 15, 3, 30)
     assert_equals(
-        datetime.datetime(1992, 3, 12, 15, 3, 30),
-        date_utils.time_to_datetime(test_time))
+        "Mar 12, 1992 at 15:03 UTC",
+        date_utils.get_default_time_display(test_time))
+    assert_equals(
+        "Mar 12, 1992 at 15:03 UTC",
+        date_utils.get_default_time_display(test_time, True))
+    assert_equals(
+        "Mar 12, 1992 at 15:03",
+        date_utils.get_default_time_display(test_time, False))
+
+# pylint: disable=W0232
+class NamelessTZ(datetime.tzinfo):
+
+    def utcoffset(self, _dt):
+        return datetime.timedelta(hours=-3)
+
+    def dst(self, _dt):
+        return datetime.timedelta(0)
+
+def test_get_default_time_display_no_tzname():
+    assert_equals("", date_utils.get_default_time_display(None))
+    test_time = datetime.datetime(1992, 3, 12, 15, 3, 30, tzinfo=NamelessTZ())
+    assert_equals(
+        "Mar 12, 1992 at 15:03-0300",
+        date_utils.get_default_time_display(test_time))
+    assert_equals(
+        "Mar 12, 1992 at 15:03-0300",
+        date_utils.get_default_time_display(test_time, True))
+    assert_equals(
+        "Mar 12, 1992 at 15:03",
+        date_utils.get_default_time_display(test_time, False))
diff --git a/common/lib/xmodule/xmodule/tests/test_fields.py b/common/lib/xmodule/xmodule/tests/test_fields.py
index 9642f7c595..f0eb082dcf 100644
--- a/common/lib/xmodule/xmodule/tests/test_fields.py
+++ b/common/lib/xmodule/xmodule/tests/test_fields.py
@@ -2,23 +2,17 @@ import datetime
 import unittest

 from django.utils.timezone import UTC
-from xmodule.fields import Date, StringyFloat, StringyInteger, StringyBoolean
+from xmodule.fields import Date, Timedelta
+from xmodule.timeinfo import TimeInfo
 import time

+
 class DateTest(unittest.TestCase):
     date = Date()

-    @staticmethod
-    def struct_to_datetime(struct_time):
-        return datetime.datetime(struct_time.tm_year, struct_time.tm_mon,
-                                 struct_time.tm_mday, struct_time.tm_hour,
-                                 struct_time.tm_min, struct_time.tm_sec, tzinfo=UTC())
-
-    def compare_dates(self, date1, date2, expected_delta):
-        dt1 = DateTest.struct_to_datetime(date1)
-        dt2 = DateTest.struct_to_datetime(date2)
-        self.assertEqual(dt1 - dt2, expected_delta, str(date1) + "-"
-                         + str(date2) + "!=" + str(expected_delta))
+    def compare_dates(self, dt1, dt2, expected_delta):
+        self.assertEqual(dt1 - dt2, expected_delta, str(dt1) + "-"
+                         + str(dt2) + "!=" + str(expected_delta))

     def test_from_json(self):
         '''Test conversion from iso compatible date strings to struct_time'''
@@ -55,11 +49,23 @@ class DateTest(unittest.TestCase):
     def test_old_due_date_format(self):
         current = datetime.datetime.today()
         self.assertEqual(
-            time.struct_time((current.year, 3, 12, 12, 0, 0, 1, 71, 0)),
+            datetime.datetime(current.year, 3, 12, 12, tzinfo=UTC()),
             DateTest.date.from_json("March 12 12:00"))
         self.assertEqual(
-            time.struct_time((current.year, 12, 4, 16, 30, 0, 2, 338, 0)),
+            datetime.datetime(current.year, 12, 4, 16, 30, tzinfo=UTC()),
             DateTest.date.from_json("December 4 16:30"))
+        self.assertIsNone(DateTest.date.from_json("12 12:00"))
+
+    def test_non_std_from_json(self):
+        """
+        Test the non-standard args being passed to from_json
+        """
+        now = datetime.datetime.now(UTC())
+        delta = now - datetime.datetime.fromtimestamp(0, UTC())
+        self.assertEqual(DateTest.date.from_json(delta.total_seconds() * 1000),
+                         now)
+        yesterday = datetime.datetime.now(UTC()) - datetime.timedelta(days=-1)
+        self.assertEqual(DateTest.date.from_json(yesterday), yesterday)

     def test_to_json(self):
         '''
@@ -67,7 +73,7 @@ class DateTest(unittest.TestCase):
         '''
         self.assertEqual(
             DateTest.date.to_json(
-                time.strptime("2012-12-31T23:59:59Z", "%Y-%m-%dT%H:%M:%SZ")),
+                datetime.datetime.strptime("2012-12-31T23:59:59Z", "%Y-%m-%dT%H:%M:%SZ")),
             "2012-12-31T23:59:59Z")
         self.assertEqual(
             DateTest.date.to_json(
@@ -76,57 +82,34 @@ class DateTest(unittest.TestCase):
         self.assertEqual(
             DateTest.date.to_json(
                 DateTest.date.from_json("2012-12-31T23:00:01-01:00")),
-            "2013-01-01T00:00:01Z")
+            "2012-12-31T23:00:01-01:00")


-class StringyIntegerTest(unittest.TestCase):
-    def assertEquals(self, expected, arg):
-        self.assertEqual(expected, StringyInteger().from_json(arg))
+class TimedeltaTest(unittest.TestCase):
+    delta = Timedelta()

-    def test_integer(self):
-        self.assertEquals(5, '5')
-        self.assertEquals(0, '0')
-        self.assertEquals(-1023, '-1023')
+    def test_from_json(self):
+        self.assertEqual(
+            TimedeltaTest.delta.from_json('1 day 12 hours 59 minutes 59 seconds'),
+            datetime.timedelta(days=1, hours=12, minutes=59, seconds=59)
+        )

-    def test_none(self):
-        self.assertEquals(None, None)
-        self.assertEquals(None, 'abc')
-        self.assertEquals(None, '[1]')
-        self.assertEquals(None, '1.023')
+        self.assertEqual(
+            TimedeltaTest.delta.from_json('1 day 46799 seconds'),
+            datetime.timedelta(days=1, seconds=46799)
+        )
+
+    def test_to_json(self):
+        self.assertEqual(
+            '1 days 46799 seconds',
+            TimedeltaTest.delta.to_json(datetime.timedelta(days=1, hours=12, minutes=59, seconds=59))
+        )


-class StringyFloatTest(unittest.TestCase):
-
-    def assertEquals(self, expected, arg):
-        self.assertEqual(expected, StringyFloat().from_json(arg))
-
-    def test_float(self):
-        self.assertEquals(.23, '.23')
-        self.assertEquals(5, '5')
-        self.assertEquals(0, '0.0')
-        self.assertEquals(-1023.22, '-1023.22')
-
-    def test_none(self):
-        self.assertEquals(None, None)
-        self.assertEquals(None, 'abc')
-        self.assertEquals(None, '[1]')
-
-
-class StringyBooleanTest(unittest.TestCase):
-
-    def assertEquals(self, expected, arg):
-        self.assertEqual(expected, StringyBoolean().from_json(arg))
-
-    def test_false(self):
-        self.assertEquals(False, "false")
-        self.assertEquals(False, "False")
-        self.assertEquals(False, "")
-        self.assertEquals(False, "hahahahah")
-
-    def test_true(self):
-        self.assertEquals(True, "true")
-        self.assertEquals(True, "TruE")
-
-    def test_pass_through(self):
-        self.assertEquals(123, 123)
-
+class TimeInfoTest(unittest.TestCase):
+    def test_time_info(self):
+        due_date = datetime.datetime(2000, 4, 14, 10, tzinfo=UTC())
+        grace_pd_string = '1 day 12 hours 59 minutes 59 seconds'
+        timeinfo = TimeInfo(due_date, grace_pd_string)
+        self.assertEqual(timeinfo.close_date,
+                         due_date + Timedelta().from_json(grace_pd_string))
diff --git a/common/lib/xmodule/xmodule/tests/test_html_module.py b/common/lib/xmodule/xmodule/tests/test_html_module.py
index e56e9babe7..ea6b358f3b 100644
--- a/common/lib/xmodule/xmodule/tests/test_html_module.py
+++ b/common/lib/xmodule/xmodule/tests/test_html_module.py
@@ -8,14 +8,13 @@ from xmodule.modulestore import Location
 from . import test_system

 class HtmlModuleSubstitutionTestCase(unittest.TestCase):
-    location = Location(["i4x", "edX", "toy", "html", "simple_html"])
     descriptor = Mock()

     def test_substitution_works(self):
         sample_xml = '''%%USER_ID%%'''
         module_data = {'data': sample_xml}
         module_system = test_system()
-        module = HtmlModule(module_system, self.location, self.descriptor, module_data)
+        module = HtmlModule(module_system, self.descriptor, module_data)
         self.assertEqual(module.get_html(), str(module_system.anonymous_student_id))

@@ -26,7 +25,7 @@ class HtmlModuleSubstitutionTestCase(unittest.TestCase):
         '''
         module_data = {'data': sample_xml}
-        module = HtmlModule(test_system(), self.location, self.descriptor, module_data)
+        module = HtmlModule(test_system(), self.descriptor, module_data)
         self.assertEqual(module.get_html(), sample_xml)

@@ -35,6 +34,6 @@ class HtmlModuleSubstitutionTestCase(unittest.TestCase):
         module_data = {'data': sample_xml}
         module_system = test_system()
         module_system.anonymous_student_id = None
-        module = HtmlModule(module_system, self.location, self.descriptor, module_data)
+        module = HtmlModule(module_system, self.descriptor, module_data)
         self.assertEqual(module.get_html(), sample_xml)
diff --git a/common/lib/xmodule/xmodule/tests/test_import.py b/common/lib/xmodule/xmodule/tests/test_import.py
index bb0d200bb6..677dd4d80e 100644
--- a/common/lib/xmodule/xmodule/tests/test_import.py
+++ b/common/lib/xmodule/xmodule/tests/test_import.py
@@ -13,6 +13,8 @@ from xmodule.modulestore.inheritance import compute_inherited_metadata
 from xmodule.fields import Date

 from .test_export import DATA_DIR
+import datetime
+from django.utils.timezone import UTC

 ORG = 'test_org'
 COURSE = 'test_course'
@@ -40,7 +42,7 @@ class DummySystem(ImportSystem):
             load_error_modules=load_error_modules,
         )

-    def render_template(self, template, context):
+    def render_template(self, _template, _context):
         raise Exception("Shouldn't be called")

@@ -62,6 +64,7 @@ class BaseCourseTestCase(unittest.TestCase):

 class ImportTestCase(BaseCourseTestCase):
+    date = Date()

     def test_fallback(self):
         '''Check that malformed xml loads as an ErrorDescriptor.'''
@@ -145,15 +148,18 @@ class ImportTestCase(BaseCourseTestCase):
         descriptor = system.process_xml(start_xml)
         compute_inherited_metadata(descriptor)

+        # pylint: disable=W0212
         print(descriptor, descriptor._model_data)
-        self.assertEqual(descriptor.lms.due, Date().from_json(v))
+        self.assertEqual(descriptor.lms.due, ImportTestCase.date.from_json(v))

         # Check that the child inherits due correctly
         child = descriptor.get_children()[0]
-        self.assertEqual(child.lms.due, Date().from_json(v))
+        self.assertEqual(child.lms.due, ImportTestCase.date.from_json(v))
         self.assertEqual(child._inheritable_metadata, child._inherited_metadata)
         self.assertEqual(2, len(child._inherited_metadata))
-        self.assertEqual('1970-01-01T00:00:00Z', child._inherited_metadata['start'])
+        self.assertLessEqual(ImportTestCase.date.from_json(
+            child._inherited_metadata['start']),
+            datetime.datetime.now(UTC()))
         self.assertEqual(v, child._inherited_metadata['due'])

         # Now export and check things
@@ -209,9 +215,13 @@ class ImportTestCase(BaseCourseTestCase):
         # Check that the child does not inherit a value for due
         child = descriptor.get_children()[0]
         self.assertEqual(child.lms.due, None)
+        # pylint: disable=W0212
         self.assertEqual(child._inheritable_metadata, child._inherited_metadata)
         self.assertEqual(1, len(child._inherited_metadata))
-        self.assertEqual('1970-01-01T00:00:00Z', child._inherited_metadata['start'])
+        # why do these tests look in the internal structure v just calling child.start?
+        self.assertLessEqual(
+            ImportTestCase.date.from_json(child._inherited_metadata['start']),
+            datetime.datetime.now(UTC()))

     def test_metadata_override_default(self):
         """
@@ -230,14 +240,17 @@ class ImportTestCase(BaseCourseTestCase):
            '''.format(due=course_due, org=ORG, course=COURSE, url_name=url_name)
         descriptor = system.process_xml(start_xml)
         child = descriptor.get_children()[0]
+        # pylint: disable=W0212
         child._model_data['due'] = child_due
         compute_inherited_metadata(descriptor)
-        self.assertEqual(descriptor.lms.due, Date().from_json(course_due))
-        self.assertEqual(child.lms.due, Date().from_json(child_due))
+        self.assertEqual(descriptor.lms.due, ImportTestCase.date.from_json(course_due))
+        self.assertEqual(child.lms.due, ImportTestCase.date.from_json(child_due))
         # Test inherited metadata. Due does not appear here (because explicitly set on child).
         self.assertEqual(1, len(child._inherited_metadata))
-        self.assertEqual('1970-01-01T00:00:00Z', child._inherited_metadata['start'])
+        self.assertLessEqual(
+            ImportTestCase.date.from_json(child._inherited_metadata['start']),
+            datetime.datetime.now(UTC()))
         # Test inheritable metadata. This has the course inheritable value for due.
         self.assertEqual(2, len(child._inheritable_metadata))
         self.assertEqual(course_due, child._inheritable_metadata['due'])
diff --git a/common/lib/xmodule/xmodule/tests/test_logic.py b/common/lib/xmodule/xmodule/tests/test_logic.py
index e60af63921..6fb331b3cf 100644
--- a/common/lib/xmodule/xmodule/tests/test_logic.py
+++ b/common/lib/xmodule/xmodule/tests/test_logic.py
@@ -1,15 +1,14 @@
 # -*- coding: utf-8 -*-
+# pylint: disable=W0232
 """Test for Xmodule functional logic."""

 import json
 import unittest

-from lxml import etree
-
 from xmodule.poll_module import PollDescriptor
 from xmodule.conditional_module import ConditionalDescriptor
 from xmodule.word_cloud_module import WordCloudDescriptor
-from xmodule.videoalpha_module import VideoAlphaDescriptor
+from xmodule.tests import test_system


 class PostData:
     """Class which emulate postdata."""
@@ -17,6 +16,7 @@ class PostData:
         self.dict_data = dict_data

     def getlist(self, key):
+        """Get data by key from `self.dict_data`."""
         return self.dict_data.get(key)

@@ -27,23 +27,26 @@ class LogicTest(unittest.TestCase):

     def setUp(self):
         class EmptyClass:
+            """Empty object."""
             pass

-        self.system = None
-        self.location = None
+        self.system = test_system()
         self.descriptor = EmptyClass()

         self.xmodule_class = self.descriptor_class.module_class
         self.xmodule = self.xmodule_class(
-            self.system, self.location,
-            self.descriptor, self.raw_model_data
+            self.system,
+            self.descriptor,
+            self.raw_model_data
         )

     def ajax_request(self, dispatch, get):
+        """Call Xmodule.handle_ajax."""
         return json.loads(self.xmodule.handle_ajax(dispatch, get))


 class PollModuleTest(LogicTest):
+    """Logic tests for Poll Xmodule."""
     descriptor_class = PollDescriptor
     raw_model_data = {
         'poll_answers': {'Yes': 1, 'Dont_know': 0, 'No': 0},
@@ -69,6 +72,7 @@ class PollModuleTest(LogicTest):


 class ConditionalModuleTest(LogicTest):
+    """Logic tests for Conditional Xmodule."""
     descriptor_class = ConditionalDescriptor

     def test_ajax_request(self):
@@ -83,6 +87,7 @@ class ConditionalModuleTest(LogicTest):


 class WordCloudModuleTest(LogicTest):
+    """Logic tests for Word Cloud Xmodule."""
     descriptor_class = WordCloudDescriptor
     raw_model_data = {
         'all_words': {'cat': 10, 'dog': 5, 'mom': 1, 'dad': 2},
@@ -91,8 +96,6 @@ class WordCloudModuleTest(LogicTest):
     }

     def test_bad_ajax_request(self):
-
-        # TODO: move top global test. Formalize all our Xmodule errors.
         response = self.ajax_request('bad_dispatch', {})
         self.assertDictEqual(response, {
             'status': 'fail',
@@ -118,34 +121,6 @@ class WordCloudModuleTest(LogicTest):
             {'text': 'cat', 'size': 12, 'percent': 54.0}]
        )

-        self.assertEqual(100.0, sum(i['percent'] for i in response['top_words']) )
-
-
-class VideoAlphaModuleTest(LogicTest):
-    descriptor_class = VideoAlphaDescriptor
-
-    raw_model_data = {
-        'data': ''
-    }
-
-    def test_get_timeframe_no_parameters(self):
-        xmltree = etree.fromstring('test')
-        output = self.xmodule._get_timeframe(xmltree)
-        self.assertEqual(output, ('', ''))
-
-    def test_get_timeframe_with_one_parameter(self):
-        xmltree = etree.fromstring(
-            'test'
-        )
-        output = self.xmodule._get_timeframe(xmltree)
-        self.assertEqual(output, (247, ''))
-
-    def test_get_timeframe_with_two_parameters(self):
-        xmltree = etree.fromstring(
-            '''test'''
-        )
-        output = self.xmodule._get_timeframe(xmltree)
-        self.assertEqual(output, (247, 47079))
+        self.assertEqual(
+            100.0,
+            sum(i['percent'] for i in response['top_words']))
diff --git a/common/lib/xmodule/xmodule/tests/test_mako_module.py b/common/lib/xmodule/xmodule/tests/test_mako_module.py
new file mode 100644
index 0000000000..7ba023bda7
--- /dev/null
+++ b/common/lib/xmodule/xmodule/tests/test_mako_module.py
@@ -0,0 +1,22 @@
+""" Test mako_module.py """
+
+from unittest import TestCase
+from mock import Mock
+
+from xmodule.mako_module import MakoModuleDescriptor
+
+
+class MakoModuleTest(TestCase):
+    """ Test MakoModuleDescriptor """
+
+    def test_render_template_check(self):
+        mock_system = Mock()
+        mock_system.render_template = None
+
+        with self.assertRaises(TypeError):
+            MakoModuleDescriptor(mock_system, {})
+
+        del mock_system.render_template
+
+        with self.assertRaises(TypeError):
+            MakoModuleDescriptor(mock_system, {})
diff --git a/common/lib/xmodule/xmodule/tests/test_progress.py b/common/lib/xmodule/xmodule/tests/test_progress.py
index 4bb663ad85..97ec00b13e 100644
--- a/common/lib/xmodule/xmodule/tests/test_progress.py
+++ b/common/lib/xmodule/xmodule/tests/test_progress.py
@@ -134,6 +134,6 @@ class ModuleProgressTest(unittest.TestCase):
     '''
     def test_xmodule_default(self):
         '''Make sure default get_progress exists, returns None'''
-        xm = x_module.XModule(test_system(), 'a://b/c/d/e', None, {})
+        xm = x_module.XModule(test_system(), None, {'location': 'a://b/c/d/e'})
         p = xm.get_progress()
         self.assertEqual(p, None)
diff --git a/lms/djangoapps/courseware/tests/test_video_xml.py b/common/lib/xmodule/xmodule/tests/test_video_xml.py
similarity index 98%
rename from lms/djangoapps/courseware/tests/test_video_xml.py
rename to common/lib/xmodule/xmodule/tests/test_video_xml.py
index c199a0aee1..fae1580323 100644
--- a/lms/djangoapps/courseware/tests/test_video_xml.py
+++ b/common/lib/xmodule/xmodule/tests/test_video_xml.py
@@ -47,13 +47,13 @@ class VideoFactory(object):
         """Method return Video Xmodule instance."""
         location = Location(["i4x", "edX", "video", "default",
                              "SampleProblem1"])
-        model_data = {'data': VideoFactory.sample_problem_xml_youtube}
+        model_data = {'data': VideoFactory.sample_problem_xml_youtube, 'location': location}

         descriptor = Mock(weight="1")

         system = test_system()
         system.render_template = lambda template, context: context
-        module = VideoModule(system, location, descriptor, model_data)
+        module = VideoModule(system, descriptor, model_data)

         return module
diff --git a/common/lib/xmodule/xmodule/tests/test_xml_module.py b/common/lib/xmodule/xmodule/tests/test_xml_module.py
index dd59ca2b48..46410def8e 100644
--- a/common/lib/xmodule/xmodule/tests/test_xml_module.py
+++ b/common/lib/xmodule/xmodule/tests/test_xml_module.py
@@ -2,11 +2,12 @@
 #pylint: disable=C0111

 from xmodule.x_module import XModuleFields
-from xblock.core import Scope, String, Object, Boolean
-from xmodule.fields import Date, StringyInteger, StringyFloat
-from xmodule.xml_module import XmlDescriptor
+from xblock.core import Scope, String, Dict, Boolean, Integer, Float, Any, List
+from xmodule.fields import Date, Timedelta
+from xmodule.xml_module import XmlDescriptor, serialize_field, deserialize_field
 import unittest
 from .import test_system
+from nose.tools import assert_equals
 from mock import Mock


@@ -17,11 +18,11 @@ class CrazyJsonString(String):

 class TestFields(object):
     # Will be returned by editable_metadata_fields.
-    max_attempts = StringyInteger(scope=Scope.settings, default=1000, values={'min': 1, 'max': 10})
+    max_attempts = Integer(scope=Scope.settings, default=1000, values={'min': 1, 'max': 10})
     # Will not be returned by editable_metadata_fields because filtered out by non_editable_metadata_fields.
     due = Date(scope=Scope.settings)
     # Will not be returned by editable_metadata_fields because is not Scope.settings.
-    student_answers = Object(scope=Scope.user_state)
+    student_answers = Dict(scope=Scope.user_state)
     # Will be returned, and can override the inherited value from XModule.
display_name = String(scope=Scope.settings, default='local default', display_name='Local Display Name', help='local help') @@ -33,9 +34,9 @@ class TestFields(object): {'display_name': 'second', 'value': 'value b'}] ) # Used for testing select type - float_select = StringyFloat(scope=Scope.settings, default=.999, values=[1.23, 0.98]) + float_select = Float(scope=Scope.settings, default=.999, values=[1.23, 0.98]) # Used for testing float type - float_non_select = StringyFloat(scope=Scope.settings, default=.999, values={'min': 0, 'step': .3}) + float_non_select = Float(scope=Scope.settings, default=.999, values={'min': 0, 'step': .3}) # Used for testing that Booleans get mapped to select type boolean_select = Boolean(scope=Scope.settings) @@ -104,7 +105,7 @@ class EditableMetadataFieldsTest(unittest.TestCase): def test_type_and_options(self): # test_display_name_field verifies that a String field is of type "Generic". - # test_integer_field verifies that a StringyInteger field is of type "Integer". + # test_integer_field verifies that an Integer field is of type "Integer". descriptor = self.get_descriptor({}) editable_fields = descriptor.editable_metadata_fields @@ -141,7 +142,7 @@ class EditableMetadataFieldsTest(unittest.TestCase): def get_xml_editable_fields(self, model_data): system = test_system() system.render_template = Mock(return_value="
    Test Template HTML
    ") - return XmlDescriptor(system=system, location=None, model_data=model_data).editable_metadata_fields + return XmlDescriptor(runtime=system, model_data=model_data).editable_metadata_fields def get_descriptor(self, model_data): class TestModuleDescriptor(TestFields, XmlDescriptor): @@ -153,7 +154,7 @@ class EditableMetadataFieldsTest(unittest.TestCase): system = test_system() system.render_template = Mock(return_value="
    Test Template HTML
") - return TestModuleDescriptor(system=system, location=None, model_data=model_data) + return TestModuleDescriptor(runtime=system, model_data=model_data) def assert_field_values(self, editable_fields, name, field, explicitly_set, inheritable, value, default_value, type='Generic', options=[]): @@ -171,3 +172,194 @@ class EditableMetadataFieldsTest(unittest.TestCase): self.assertEqual(explicitly_set, test_field['explicitly_set']) self.assertEqual(inheritable, test_field['inheritable']) + + +class TestSerialize(unittest.TestCase): + """ Tests the serialize method, which is not dependent on type. """ + def test_serialize(self): + assert_equals('null', serialize_field(None)) + assert_equals('-2', serialize_field(-2)) + assert_equals('"2"', serialize_field('2')) + assert_equals('-3.41', serialize_field(-3.41)) + assert_equals('"2.589"', serialize_field('2.589')) + assert_equals('false', serialize_field(False)) + assert_equals('"false"', serialize_field('false')) + assert_equals('"fAlse"', serialize_field('fAlse')) + assert_equals('"hat box"', serialize_field('hat box')) + assert_equals('{"bar": "hat", "frog": "green"}', serialize_field({'bar': 'hat', 'frog' : 'green'})) + assert_equals('[3.5, 5.6]', serialize_field([3.5, 5.6])) + assert_equals('["foo", "bar"]', serialize_field(['foo', 'bar'])) + assert_equals('"2012-12-31T23:59:59Z"', serialize_field("2012-12-31T23:59:59Z")) + assert_equals('"1 day 12 hours 59 minutes 59 seconds"', + serialize_field("1 day 12 hours 59 minutes 59 seconds")) + + +class TestDeserialize(unittest.TestCase): + def assertDeserializeEqual(self, expected, arg): + """ + Asserts the result of deserialize_field. + """ + assert_equals(expected, deserialize_field(self.test_field(), arg)) + + + def assertDeserializeNonString(self): + """ + Asserts input value is returned for None or something that is not a string. + For all types, 'null' is also always returned as None.
+ """ + self.assertDeserializeEqual(None, None) + self.assertDeserializeEqual(3.14, 3.14) + self.assertDeserializeEqual(True, True) + self.assertDeserializeEqual([10], [10]) + self.assertDeserializeEqual({}, {}) + self.assertDeserializeEqual([], []) + self.assertDeserializeEqual(None, 'null') + + +class TestDeserializeInteger(TestDeserialize): + """ Tests deserialize as related to Integer type. """ + + test_field = Integer + + def test_deserialize(self): + self.assertDeserializeEqual(-2, '-2') + self.assertDeserializeEqual("450", '"450"') + + # False can be parsed as an int (converts to 0) + self.assertDeserializeEqual(False, 'false') + # True can be parsed as an int (converts to 1) + self.assertDeserializeEqual(True, 'true') + # 2.78 can be converted to int, so the string will be deserialized + self.assertDeserializeEqual(-2.78, '-2.78') + + + def test_deserialize_unsupported_types(self): + self.assertDeserializeEqual('[3]', '[3]') + # '2.78' cannot be converted to int, so input value is returned + self.assertDeserializeEqual('"-2.78"', '"-2.78"') + # 'false' cannot be converted to int, so input value is returned + self.assertDeserializeEqual('"false"', '"false"') + self.assertDeserializeNonString() + + +class TestDeserializeFloat(TestDeserialize): + """ Tests deserialize as related to Float type.
""" + + test_field = Float + + def test_deserialize(self): + self.assertDeserializeEqual(-2, '-2') + self.assertDeserializeEqual("450", '"450"') + self.assertDeserializeEqual(-2.78, '-2.78') + self.assertDeserializeEqual("0.45", '"0.45"') + + # False can be parsed as a float (converts to 0) + self.assertDeserializeEqual(False, 'false') + # True can be parsed as a float (converts to 1) + self.assertDeserializeEqual(True, 'true') + + def test_deserialize_unsupported_types(self): + self.assertDeserializeEqual('[3]', '[3]') + # 'false' cannot be converted to float, so input value is returned + self.assertDeserializeEqual('"false"', '"false"') + self.assertDeserializeNonString() + + +class TestDeserializeBoolean(TestDeserialize): + """ Tests deserialize as related to Boolean type. """ + + test_field = Boolean + + def test_deserialize(self): + # json.loads converts the value to Python bool + self.assertDeserializeEqual(False, 'false') + self.assertDeserializeEqual(True, 'true') + + # json.loads fails, string value is returned. + self.assertDeserializeEqual('False', 'False') + self.assertDeserializeEqual('True', 'True') + + # json.loads deserializes as a string + self.assertDeserializeEqual('false', '"false"') + self.assertDeserializeEqual('fAlse', '"fAlse"') + self.assertDeserializeEqual("TruE", '"TruE"') + + # 2.78 can be converted to a bool, so the string will be deserialized + self.assertDeserializeEqual(-2.78, '-2.78') + + self.assertDeserializeNonString() + + +class TestDeserializeString(TestDeserialize): + """ Tests deserialize as related to String type.
""" + + test_field = String + + def test_deserialize(self): + self.assertDeserializeEqual('hAlf', '"hAlf"') + self.assertDeserializeEqual('false', '"false"') + self.assertDeserializeEqual('single quote', 'single quote') + + def test_deserialize_unsupported_types(self): + self.assertDeserializeEqual('3.4', '3.4') + self.assertDeserializeEqual('false', 'false') + self.assertDeserializeEqual('2', '2') + self.assertDeserializeEqual('[3]', '[3]') + self.assertDeserializeNonString() + + +class TestDeserializeAny(TestDeserialize): + """ Tests deserialize as related to Any type. """ + + test_field = Any + + def test_deserialize(self): + self.assertDeserializeEqual('hAlf', '"hAlf"') + self.assertDeserializeEqual('false', '"false"') + self.assertDeserializeEqual({'bar': 'hat', 'frog' : 'green'}, '{"bar": "hat", "frog": "green"}') + self.assertDeserializeEqual([3.5, 5.6], '[3.5, 5.6]') + self.assertDeserializeEqual('[', '[') + self.assertDeserializeEqual(False, 'false') + self.assertDeserializeEqual(3.4, '3.4') + self.assertDeserializeNonString() + + +class TestDeserializeList(TestDeserialize): + """ Tests deserialize as related to List type. """ + + test_field = List + + def test_deserialize(self): + self.assertDeserializeEqual(['foo', 'bar'], '["foo", "bar"]') + self.assertDeserializeEqual([3.5, 5.6], '[3.5, 5.6]') + self.assertDeserializeEqual([], '[]') + + def test_deserialize_unsupported_types(self): + self.assertDeserializeEqual('3.4', '3.4') + self.assertDeserializeEqual('false', 'false') + self.assertDeserializeEqual('2', '2') + self.assertDeserializeNonString() + + +class TestDeserializeDate(TestDeserialize): + """ Tests deserialize as related to Date type. 
""" + + test_field = Date + + def test_deserialize(self): + self.assertDeserializeEqual('2012-12-31T23:59:59Z', "2012-12-31T23:59:59Z") + self.assertDeserializeEqual('2012-12-31T23:59:59Z', '"2012-12-31T23:59:59Z"') + self.assertDeserializeNonString() + + +class TestDeserializeTimedelta(TestDeserialize): + """ Tests deserialize as related to Timedelta type. """ + + test_field = Timedelta + + def test_deserialize(self): + self.assertDeserializeEqual('1 day 12 hours 59 minutes 59 seconds', + '1 day 12 hours 59 minutes 59 seconds') + self.assertDeserializeEqual('1 day 12 hours 59 minutes 59 seconds', + '"1 day 12 hours 59 minutes 59 seconds"') + self.assertDeserializeNonString() diff --git a/common/lib/xmodule/xmodule/timeinfo.py b/common/lib/xmodule/xmodule/timeinfo.py index a7743b6bee..8f4d99506a 100644 --- a/common/lib/xmodule/xmodule/timeinfo.py +++ b/common/lib/xmodule/xmodule/timeinfo.py @@ -1,7 +1,5 @@ -from .timeparse import parse_timedelta -from xmodule.util.date_utils import time_to_datetime - import logging +from xmodule.fields import Timedelta log = logging.getLogger(__name__) class TimeInfo(object): @@ -15,16 +13,17 @@ class TimeInfo(object): self.close_date - the real due date """ + _delta_standin = Timedelta() def __init__(self, due_date, grace_period_string): if due_date is not None: - self.display_due_date = time_to_datetime(due_date) + self.display_due_date = due_date else: self.display_due_date = None if grace_period_string is not None and self.display_due_date: try: - self.grace_period = parse_timedelta(grace_period_string) + self.grace_period = TimeInfo._delta_standin.from_json(grace_period_string) self.close_date = self.display_due_date + self.grace_period except: log.error("Error parsing the grace period {0}".format(grace_period_string)) diff --git a/common/lib/xmodule/xmodule/timelimit_module.py b/common/lib/xmodule/xmodule/timelimit_module.py index 732aa25e2e..6be14e7574 100644 --- a/common/lib/xmodule/xmodule/timelimit_module.py +++ 
b/common/lib/xmodule/xmodule/timelimit_module.py @@ -123,9 +123,6 @@ class TimeLimitDescriptor(TimeLimitFields, XMLEditingDescriptor, XmlDescriptor): module_class = TimeLimitModule - # For remembering when a student started, and when they should end - stores_state = True - @classmethod def definition_from_xml(cls, xml_object, system): children = [] diff --git a/common/lib/xmodule/xmodule/timeparse.py b/common/lib/xmodule/xmodule/timeparse.py deleted file mode 100644 index 15a8233ccb..0000000000 --- a/common/lib/xmodule/xmodule/timeparse.py +++ /dev/null @@ -1,47 +0,0 @@ -""" -Helper functions for handling time in the format we like. -""" -import time -import re -from datetime import timedelta - -TIME_FORMAT = "%Y-%m-%dT%H:%M" - -TIMEDELTA_REGEX = re.compile(r'^((?P\d+?) day(?:s?))?(\s)?((?P\d+?) hour(?:s?))?(\s)?((?P\d+?) minute(?:s)?)?(\s)?((?P\d+?) second(?:s)?)?$') - -def parse_time(time_str): - """ - Takes a time string in TIME_FORMAT - - Returns it as a time_struct. - - Raises ValueError if the string is not in the right format. 
- """ - return time.strptime(time_str, TIME_FORMAT) - - -def stringify_time(time_struct): - """ - Convert a time struct to a string - """ - return time.strftime(TIME_FORMAT, time_struct) - -def parse_timedelta(time_str): - """ - time_str: A string with the following components: - day[s] (optional) - hour[s] (optional) - minute[s] (optional) - second[s] (optional) - - Returns a datetime.timedelta parsed from the string - """ - parts = TIMEDELTA_REGEX.match(time_str) - if not parts: - return - parts = parts.groupdict() - time_params = {} - for (name, param) in parts.iteritems(): - if param: - time_params[name] = int(param) - return timedelta(**time_params) diff --git a/common/lib/xmodule/xmodule/util/date_utils.py b/common/lib/xmodule/xmodule/util/date_utils.py index 1e64856e8f..933226ede6 100644 --- a/common/lib/xmodule/xmodule/util/date_utils.py +++ b/common/lib/xmodule/xmodule/util/date_utils.py @@ -1,34 +1,22 @@ -import time -import datetime - - -def get_default_time_display(time_struct, show_timezone=True): +def get_default_time_display(dt, show_timezone=True): """ - Converts a time struct to a string representation. This is the default + Converts a datetime to a string representation. This is the default representation used in Studio and LMS. It is of the form "Apr 09, 2013 at 16:00" or "Apr 09, 2013 at 16:00 UTC", depending on the value of show_timezone. - If None is passed in for time_struct, an empty string will be returned. + If None is passed in for dt, an empty string will be returned. The default value of show_timezone is True. """ - timezone = "" if time_struct is None or not show_timezone else " UTC" - return get_time_struct_display(time_struct, "%b %d, %Y at %H:%M") + timezone - - -def get_time_struct_display(time_struct, format): - """ - Converts a time struct to a string based on the given format. - - If None is passed in, an empty string will be returned. 
- """ - return '' if time_struct is None else time.strftime(format, time_struct) - - -def time_to_datetime(time_struct): - """ - Convert a time struct to a datetime. - - If None is passed in, None will be returned. - """ - return datetime.datetime(*time_struct[:6]) if time_struct else None + if dt is None: + return "" + timezone = "" + if dt is not None and show_timezone: + if dt.tzinfo is not None: + try: + timezone = " " + dt.tzinfo.tzname(dt) + except NotImplementedError: + timezone = dt.strftime('%z') + else: + timezone = " UTC" + return dt.strftime("%b %d, %Y at %H:%M") + timezone diff --git a/common/lib/xmodule/xmodule/video_module.py b/common/lib/xmodule/xmodule/video_module.py index 994611c676..77d83ca1f3 100644 --- a/common/lib/xmodule/xmodule/video_module.py +++ b/common/lib/xmodule/xmodule/video_module.py @@ -138,5 +138,4 @@ class VideoModule(VideoFields, XModule): class VideoDescriptor(VideoFields, RawDescriptor): """Descriptor for `VideoModule`.""" module_class = VideoModule - stores_state = True template_dir_name = "video" diff --git a/common/lib/xmodule/xmodule/videoalpha_module.py b/common/lib/xmodule/xmodule/videoalpha_module.py index 16230480a7..a64e094a58 100644 --- a/common/lib/xmodule/xmodule/videoalpha_module.py +++ b/common/lib/xmodule/xmodule/videoalpha_module.py @@ -1,3 +1,15 @@ +# pylint: disable=W0223 +"""VideoAlpha is an ungraded XModule for video content. +It is a new, improved video module that supports additional features: + +- Can play non-YouTube video sources via in-browser HTML5 video player. +- YouTube defaults to HTML5 mode from the start. +- Speed changes in both YouTube and non-YouTube videos happen via +in-browser HTML5 video method (when in HTML5 mode). +- Navigational subtitles can be disabled altogether via an attribute +in XML.
+""" + import json import logging @@ -5,6 +17,7 @@ from lxml import etree from pkg_resources import resource_string, resource_listdir from django.http import Http404 +from django.conf import settings from xmodule.x_module import XModule from xmodule.raw_module import RawDescriptor @@ -20,6 +33,7 @@ log = logging.getLogger(__name__) class VideoAlphaFields(object): + """Fields for `VideoAlphaModule` and `VideoAlphaDescriptor`.""" data = String(help="XML data for the problem", scope=Scope.content) position = Integer(help="Current position in the video", scope=Scope.user_state, default=0) display_name = String(help="Display name for this module", scope=Scope.settings) @@ -67,7 +81,7 @@ class VideoAlphaModule(VideoAlphaFields, XModule): 'ogv': self._get_source(xmltree, ['ogv']), } self.track = self._get_track(xmltree) - self.start_time, self.end_time = self._get_timeframe(xmltree) + self.start_time, self.end_time = self.get_timeframe(xmltree) def _get_source(self, xmltree, exts=None): """Find the first valid source, which ends with one of `exts`.""" @@ -76,7 +90,7 @@ class VideoAlphaModule(VideoAlphaFields, XModule): return self._get_first_external(xmltree, 'source', condition) def _get_track(self, xmltree): - # find the first valid track + """Find the first valid track.""" return self._get_first_external(xmltree, 'track') def _get_first_external(self, xmltree, tag, condition=bool): @@ -92,39 +106,33 @@ class VideoAlphaModule(VideoAlphaFields, XModule): break return result - def _get_timeframe(self, xmltree): + def get_timeframe(self, xmltree): """ Converts 'start_time' and 'end_time' parameters in video tag to seconds. If there are no parameters, returns empty string. """ - def parse_time(s): + def parse_time(str_time): """Converts s in '12:34:45' format to seconds. 
If s is None, returns empty string""" - if s is None: + if str_time is None: return '' else: - x = time.strptime(s, '%H:%M:%S') + obj_time = time.strptime(str_time, '%H:%M:%S') return datetime.timedelta( - hours=x.tm_hour, - minutes=x.tm_min, - seconds=x.tm_sec + hours=obj_time.tm_hour, + minutes=obj_time.tm_min, + seconds=obj_time.tm_sec ).total_seconds() return parse_time(xmltree.get('start_time')), parse_time(xmltree.get('end_time')) def handle_ajax(self, dispatch, get): - """Handle ajax calls to this video. - TODO (vshnayder): This is not being called right now, so the - position is not being saved. - """ + """This is not being called right now, so we raise a 404 error.""" log.debug(u"GET {0}".format(get)) log.debug(u"DISPATCH {0}".format(dispatch)) - if dispatch == 'goto_position': - self.position = int(float(get['position'])) - log.info(u"NEW POSITION {0}".format(self.position)) - return json.dumps({'success': True}) raise Http404() def get_instance_state(self): + """Return information about state (position).""" return json.dumps({'position': self.position}) def get_html(self): @@ -142,16 +150,18 @@ class VideoAlphaModule(VideoAlphaFields, XModule): 'sources': self.sources, 'track': self.track, 'display_name': self.display_name_with_default, - # TODO (cpennington): This won't work when we move to data that isn't on the filesystem + # This won't work when we move to data that + # isn't on the filesystem 'data_dir': getattr(self, 'data_dir', None), 'caption_asset_path': caption_asset_path, 'show_captions': self.show_captions, 'start': self.start_time, - 'end': self.end_time + 'end': self.end_time, + 'autoplay': settings.MITX_FEATURES.get('AUTOPLAY_VIDEOS', True) }) class VideoAlphaDescriptor(VideoAlphaFields, RawDescriptor): + """Descriptor for `VideoAlphaModule`.""" module_class = VideoAlphaModule - stores_state = True template_dir_name = "videoalpha" diff --git a/common/lib/xmodule/xmodule/word_cloud_module.py b/common/lib/xmodule/xmodule/word_cloud_module.py
index e38b8cf195..ac5b3051de 100644 --- a/common/lib/xmodule/xmodule/word_cloud_module.py +++ b/common/lib/xmodule/xmodule/word_cloud_module.py @@ -14,8 +14,7 @@ from xmodule.raw_module import RawDescriptor from xmodule.editing_module import MetadataOnlyEditingDescriptor from xmodule.x_module import XModule -from xblock.core import Scope, Object, Boolean, List -from fields import StringyBoolean, StringyInteger +from xblock.core import Scope, Dict, Boolean, List, Integer log = logging.getLogger(__name__) @@ -32,21 +31,21 @@ def pretty_bool(value): class WordCloudFields(object): """XFields for word cloud.""" - num_inputs = StringyInteger( + num_inputs = Integer( display_name="Inputs", help="Number of text boxes available for students to input words/sentences.", scope=Scope.settings, default=5, values={"min": 1} ) - num_top_words = StringyInteger( + num_top_words = Integer( display_name="Maximum Words", help="Maximum number of words to be displayed in generated word cloud.", scope=Scope.settings, default=250, values={"min": 1} ) - display_student_percents = StringyBoolean( + display_student_percents = Boolean( display_name="Show Percents", help="Statistics are shown for entered words near that word.", scope=Scope.settings, @@ -64,11 +63,11 @@ class WordCloudFields(object): scope=Scope.user_state, default=[] ) - all_words = Object( + all_words = Dict( help="All possible words from all students.", scope=Scope.content ) - top_words = Object( + top_words = Dict( help="Top num_top_words words for word cloud.", scope=Scope.content ) @@ -239,4 +238,3 @@ class WordCloudDescriptor(MetadataOnlyEditingDescriptor, RawDescriptor, WordClou """Descriptor for WordCloud Xmodule.""" module_class = WordCloudModule template_dir_name = 'word_cloud' - stores_state = True diff --git a/common/lib/xmodule/xmodule/x_module.py b/common/lib/xmodule/xmodule/x_module.py index 3ae70543cb..3edc22df43 100644 --- a/common/lib/xmodule/xmodule/x_module.py +++ b/common/lib/xmodule/xmodule/x_module.py @@ 
-10,7 +10,7 @@ from pkg_resources import resource_listdir, resource_string, resource_isdir from xmodule.modulestore import Location from xmodule.modulestore.exceptions import ItemNotFoundError -from xblock.core import XBlock, Scope, String, Integer, Float +from xblock.core import XBlock, Scope, String, Integer, Float, ModelType log = logging.getLogger(__name__) @@ -19,6 +19,23 @@ def dummy_track(event_type, event): pass +class LocationField(ModelType): + """ + XBlock field for storing Location values + """ + def from_json(self, value): + """ + Parse the json value as a Location + """ + return Location(value) + + def to_json(self, value): + """ + Store the Location as a url string in json + """ + return value.url() + + class HTMLSnippet(object): """ A base class defining an interface for an object that is able to present an @@ -87,6 +104,16 @@ class XModuleFields(object): default=None ) + # Please note that in order to be compatible with XBlocks more generally, + # the LMS and CMS shouldn't be using this field. It's only for internal + # consumption by the XModules themselves + location = LocationField( + display_name="Location", + help="This is the location id for the XModule.", + scope=Scope.content, + default=Location(None), + ) + class XModule(XModuleFields, HTMLSnippet, XBlock): ''' Implements a generic learning module. @@ -106,24 +133,20 @@ class XModule(XModuleFields, HTMLSnippet, XBlock): icon_class = 'other' - def __init__(self, system, location, descriptor, model_data): + def __init__(self, runtime, descriptor, model_data): ''' Construct a new xmodule - system: A ModuleSystem allowing access to external resources - - location: Something Location-like that identifies this xmodule + runtime: An XBlock runtime allowing access to external resources descriptor: the XModuleDescriptor that this module is an instance of. - TODO (vshnayder): remove the definition parameter and location--they - can come from the descriptor. 
model_data: A dictionary-like object that maps field names to values for those fields. ''' + super(XModule, self).__init__(runtime, model_data) self._model_data = model_data - self.system = system - self.location = Location(location) + self.system = runtime self.descriptor = descriptor self.url_name = self.location.name self.category = self.location.category @@ -254,19 +277,6 @@ class XModule(XModuleFields, HTMLSnippet, XBlock): get is a dictionary-like object ''' return "" - # cdodge: added to support dynamic substitutions of - # links for courseware assets (e.g. images). is passed through from lxml.html parser - def rewrite_content_links(self, link): - # see if we start with our format, e.g. 'xasset:' - if link.startswith(XASSET_SRCREF_PREFIX): - # yes, then parse out the name - name = link[len(XASSET_SRCREF_PREFIX):] - loc = Location(self.location) - # resolve the reference to our internal 'filepath' which - link = StaticContent.compute_location_filename(loc.org, loc.course, name) - - return link - def policy_key(location): """ @@ -327,10 +337,6 @@ class XModuleDescriptor(XModuleFields, HTMLSnippet, ResourceTemplates, XBlock): # Attributes for inspection of the descriptor - # Indicates whether the xmodule state should be - # stored in a database (independent of shared state) - stores_state = False - # This indicates whether the xmodule is a problem-type. # It should respond to max_score() and grade(). It can be graded or ungraded # (like a practice problem). @@ -344,13 +350,12 @@ class XModuleDescriptor(XModuleFields, HTMLSnippet, ResourceTemplates, XBlock): template_dir_name = "default" # Class level variable + + # True if this descriptor always requires recalculation of grades, for + # example if the score can change via an external service, not just when the + # student interacts with the module on the page. A specific example is + # FoldIt, which posts grade-changing updates through a separate API.
always_recalculate_grades = False - """ - Return whether this descriptor always requires recalculation of grades, for - example if the score can change via an extrnal service, not just when the - student interacts with the module on the page. A specific example is - FoldIt, which posts grade-changing updates through a separate API. - """ # VS[compat]. Backwards compatibility code that can go away after # importing 2012 courses. @@ -361,10 +366,7 @@ class XModuleDescriptor(XModuleFields, HTMLSnippet, ResourceTemplates, XBlock): } # ============================= STRUCTURAL MANIPULATION =================== - def __init__(self, - system, - location, - model_data): + def __init__(self, *args, **kwargs): """ Construct a new XModuleDescriptor. The only required arguments are the system, used for interaction with external resources, and the @@ -375,19 +377,17 @@ class XModuleDescriptor(XModuleFields, HTMLSnippet, ResourceTemplates, XBlock): This allows for maximal flexibility to add to the interface while preserving backwards compatibility. - system: A DescriptorSystem for interacting with external resources - - location: Something Location-like that identifies this xmodule + runtime: A DescriptorSystem for interacting with external resources model_data: A dictionary-like object that maps field names to values for those fields. 
+ + XModuleDescriptor.__init__ takes the same arguments as xblock.core:XBlock.__init__ """ - self.system = system - self.location = Location(location) + super(XModuleDescriptor, self).__init__(*args, **kwargs) + self.system = self.runtime self.url_name = self.location.name self.category = self.location.category - self._model_data = model_data - self._child_instances = None @property @@ -445,7 +445,6 @@ class XModuleDescriptor(XModuleFields, HTMLSnippet, ResourceTemplates, XBlock): """ return self.module_class( system, - self.location, self, system.xblock_model_data(self), ) @@ -514,7 +513,9 @@ class XModuleDescriptor(XModuleFields, HTMLSnippet, ResourceTemplates, XBlock): else: model_data['data'] = definition['data'] - return cls(system=system, location=json_data['location'], model_data=model_data) + model_data['location'] = json_data['location'] + + return cls(system, model_data) @classmethod def _translate(cls, key): diff --git a/common/lib/xmodule/xmodule/xml_module.py b/common/lib/xmodule/xmodule/xml_module.py index 2f54bbf405..e1a0e0cf08 100644 --- a/common/lib/xmodule/xmodule/xml_module.py +++ b/common/lib/xmodule/xmodule/xml_module.py @@ -6,7 +6,7 @@ import sys from collections import namedtuple from lxml import etree -from xblock.core import Object, Scope +from xblock.core import Dict, Scope from xmodule.x_module import (XModuleDescriptor, policy_key) from xmodule.modulestore import Location from xmodule.modulestore.inheritance import own_metadata @@ -79,12 +79,53 @@ class AttrMap(_AttrMapBase): return _AttrMapBase.__new__(_cls, from_xml, to_xml) +def serialize_field(value): + """ + Return a string version of the value (where value is the JSON-formatted, internally stored value). + + By default, this is the result of calling json.dumps on the input value. + """ + return json.dumps(value) + + +def deserialize_field(field, value): + """ + Deserialize the string version to the value stored internally. 
+ + Note that this is not the same as the value returned by from_json, as model types typically store + their value internally as JSON. By default, this method will return the result of calling json.loads + on the supplied value, unless json.loads throws a TypeError, or the type of the value returned by json.loads + is not supported for this class (from_json throws an Error). In either of those cases, this method returns + the input value. + """ + try: + deserialized = json.loads(value) + if deserialized is None: + return deserialized + try: + field.from_json(deserialized) + return deserialized + except (ValueError, TypeError): + # Support older serialized version, which was just a string, not result of json.dumps. + # If the deserialized version cannot be converted to the type (via from_json), + # just return the original value. For example, if a string value of '3.4' was + # stored for a String field (before we started storing the result of json.dumps), + # then it would be deserialized as 3.4, but 3.4 is not supported for a String + # field. Therefore field.from_json(3.4) will throw an Error, and we should + # actually return the original value of '3.4'. + return value + + except (ValueError, TypeError): + # Support older serialized version. + return value + + class XmlDescriptor(XModuleDescriptor): """ Mixin class for standardized parsing of from xml """ - xml_attributes = Object(help="Map of unhandled xml attributes, used only for storage between import and export", + xml_attributes = Dict(help="Map of unhandled xml attributes, used only for storage between import and export", default={}, scope=Scope.settings) # Extension to append to filename paths @@ -120,25 +161,15 @@ class XmlDescriptor(XModuleDescriptor): metadata_to_export_to_policy = ('discussion_topics') - # A dictionary mapping xml attribute names AttrMaps that describe how - # to import and export them - # Allow json to specify either the string "true", or the bool True. The string is preferred. 
- to_bool = lambda val: val == 'true' or val == True - from_bool = lambda val: str(val).lower() - bool_map = AttrMap(to_bool, from_bool) - - to_int = lambda val: int(val) - from_int = lambda val: str(val) - int_map = AttrMap(to_int, from_int) - xml_attribute_map = { - # type conversion: want True/False in python, "true"/"false" in xml - 'graded': bool_map, - 'hide_progress_tab': bool_map, - 'allow_anonymous': bool_map, - 'allow_anonymous_to_peers': bool_map, - 'show_timezone': bool_map, - } + @classmethod + def get_map_for_field(cls, attr): + for field in set(cls.fields + cls.lms.fields): + if field.name == attr: + from_xml = lambda val: deserialize_field(field, val) + to_xml = lambda val : serialize_field(val) + return AttrMap(from_xml, to_xml) + return AttrMap() @classmethod def definition_from_xml(cls, xml_object, system): @@ -188,7 +219,6 @@ class XmlDescriptor(XModuleDescriptor): filepath, location.url(), str(err)) raise Exception, msg, sys.exc_info()[2] - @classmethod def load_definition(cls, xml_object, system, location): '''Load a descriptor definition from the specified xml_object. @@ -224,7 +254,7 @@ class XmlDescriptor(XModuleDescriptor): definition, children = cls.definition_from_xml(definition_xml, system) if definition_metadata: definition['definition_metadata'] = definition_metadata - definition['filename'] = [ filepath, filename ] + definition['filename'] = [ filepath, filename ] return definition, children @@ -246,7 +276,7 @@ class XmlDescriptor(XModuleDescriptor): # don't load these continue - attr_map = cls.xml_attribute_map.get(attr, AttrMap()) + attr_map = cls.get_map_for_field(attr) metadata[attr] = attr_map.from_xml(val) return metadata @@ -258,7 +288,7 @@ class XmlDescriptor(XModuleDescriptor): through the attrmap. Updates the metadata dict in place. 
""" for attr in policy: - attr_map = cls.xml_attribute_map.get(attr, AttrMap()) + attr_map = cls.get_map_for_field(attr) metadata[cls._translate(attr)] = attr_map.from_xml(policy[attr]) @classmethod @@ -322,10 +352,10 @@ class XmlDescriptor(XModuleDescriptor): for key, value in metadata.items(): if key not in set(f.name for f in cls.fields + cls.lms.fields): model_data['xml_attributes'][key] = value + model_data['location'] = location return cls( system, - location, model_data, ) @@ -347,7 +377,7 @@ class XmlDescriptor(XModuleDescriptor): def export_to_xml(self, resource_fs): """ - Returns an xml string representign this module, and all modules + Returns an xml string representing this module, and all modules underneath it. May also write required resources out to resource_fs Assumes that modules have single parentage (that no module appears twice @@ -372,7 +402,7 @@ class XmlDescriptor(XModuleDescriptor): """Get the value for this attribute that we want to store. (Possible format conversion through an AttrMap). """ - attr_map = self.xml_attribute_map.get(attr, AttrMap()) + attr_map = self.get_map_for_field(attr) return attr_map.to_xml(self._model_data[attr]) # Add the non-inherited metadata diff --git a/common/test/data/full/course.xml b/common/test/data/full/course.xml index b2f9097020..9ee128da1a 100644 --- a/common/test/data/full/course.xml +++ b/common/test/data/full/course.xml @@ -1 +1 @@ - + diff --git a/common/test/data/full/sequential/Administrivia_and_Circuit_Elements.xml b/common/test/data/full/sequential/Administrivia_and_Circuit_Elements.xml index 47b19f75ed..35e4704d7c 100644 --- a/common/test/data/full/sequential/Administrivia_and_Circuit_Elements.xml +++ b/common/test/data/full/sequential/Administrivia_and_Circuit_Elements.xml @@ -1,24 +1,34 @@ - - - - - - S1E4 has been removed… - - - - + + + + + + S1E4 has been removed… + + + + - - - - -
    [chapter/sequential XML markup lost in extraction; the hunk re-indents the
    sequential's inline HTML content ("Inline content…") and its enclosing elements]
    diff --git a/jenkins/test.sh b/jenkins/test.sh index 12f909313f..e5ac4f6f71 100755 --- a/jenkins/test.sh +++ b/jenkins/test.sh @@ -65,6 +65,8 @@ export DJANGO_LIVE_TEST_SERVER_ADDRESS=${DJANGO_LIVE_TEST_SERVER_ADDRESS-localho source /mnt/virtualenvs/"$JOB_NAME"/bin/activate +bundle install + rake install_prereqs rake clobber rake pep8 > pep8.log || cat pep8.log diff --git a/lms/CHANGELOG.md b/lms/CHANGELOG.md deleted file mode 100644 index 0794d379b9..0000000000 --- a/lms/CHANGELOG.md +++ /dev/null @@ -1,18 +0,0 @@ -Instructions -============ -For each pull request, add one or more lines to the bottom of the change list. When -code is released to production, change the `Upcoming` entry to todays date, and add -a new block at the bottom of the file. - - Upcoming - -------- - -Change log entries should be targeted at end users. A good place to start is the -user story that instigated the pull request. - -Changes -======= - -Upcoming --------- -* Created changelog \ No newline at end of file diff --git a/lms/djangoapps/certificates/queue.py b/lms/djangoapps/certificates/queue.py index b4632ce9ab..af1037f903 100644 --- a/lms/djangoapps/certificates/queue.py +++ b/lms/djangoapps/certificates/queue.py @@ -3,6 +3,7 @@ from certificates.models import certificate_status_for_student from certificates.models import CertificateStatuses as status from certificates.models import CertificateWhitelist +from mitxmako.middleware import MakoMiddleware from courseware import grades, courses from django.test.client import RequestFactory from capa.xqueue_interface import XQueueInterface @@ -51,6 +52,14 @@ class XQueueCertInterface(object): """ def __init__(self, request=None): + # MakoMiddleware Note: + # Line below has the side-effect of writing to a module level lookup + # table that will allow problems to render themselves. 
If this is not + # present, problems that a student hasn't seen will error when loading, + # causing the grading system to under-count the possible score and + # inflate their grade. This dependency is bad and was probably recently + # introduced. This is the bandage until we can trace the root cause. + m = MakoMiddleware() # Get basic auth (username/password) for # xqueue connection if it's in the settings @@ -161,6 +170,10 @@ class XQueueCertInterface(object): cert, created = GeneratedCertificate.objects.get_or_create( user=student, course_id=course_id) + # Needed + self.request.user = student + self.request.session = {} + grade = grades.grade(student, self.request, course) is_whitelisted = self.whitelist.filter( user=student, course_id=course_id, whitelist=True).exists() @@ -211,5 +224,5 @@ class XQueueCertInterface(object): (error, msg) = self.xqueue_interface.send_to_queue( header=xheader, body=json.dumps(contents)) if error: - logger.critical('Unable to add a request to the queue') + logger.critical('Unable to add a request to the queue: {} {}'.format(error, msg)) raise Exception('Unable to send queue message') diff --git a/lms/djangoapps/courseware/access.py b/lms/djangoapps/courseware/access.py index ace9c0096b..07987a8edf 100644 --- a/lms/djangoapps/courseware/access.py +++ b/lms/djangoapps/courseware/access.py @@ -16,6 +16,7 @@ from xmodule.x_module import XModule, XModuleDescriptor from student.models import CourseEnrollmentAllowed from courseware.masquerade import is_masquerading_as_student +from django.utils.timezone import UTC DEBUG_ACCESS = False @@ -133,7 +134,7 @@ def _has_access_course_desc(user, course, action): (staff can always enroll) """ - now = time.gmtime() + now = datetime.now(UTC()) start = course.enrollment_start end = course.enrollment_end @@ -242,7 +243,7 @@ def _has_access_descriptor(user, descriptor, action, course_context=None): # Check start date if descriptor.lms.start is not None: - now = time.gmtime() + now = datetime.now(UTC()) 
effective_start = _adjust_start_date_for_beta_testers(user, descriptor) if now > effective_start: # after start date, everyone can see it @@ -365,7 +366,7 @@ def _course_org_staff_group_name(location, course_context=None): def group_names_for(role, location, course_context=None): - """Returns the group names for a given role with this location. Plural + """Returns the group names for a given role with this location. Plural because it will return both the name we expect now as well as the legacy group name we support for backwards compatibility. This should not check the DB for existence of a group (like some of its callers do) because that's @@ -483,8 +484,7 @@ def _adjust_start_date_for_beta_testers(user, descriptor): non-None start date. Returns: - A time, in the same format as returned by time.gmtime(). Either the same as - start, or earlier for beta testers. + A datetime. Either the same as start, or earlier for beta testers. NOTE: number of days to adjust should be cached to avoid looking it up thousands of times per query. @@ -505,15 +505,11 @@ def _adjust_start_date_for_beta_testers(user, descriptor): beta_group = course_beta_test_group_name(descriptor.location) if beta_group in user_groups: debug("Adjust start time: user in group %s", beta_group) - # time_structs don't support subtraction, so convert to datetimes, - # subtract, convert back. 
- # (fun fact: datetime(*a_time_struct[:6]) is the beautiful syntax for - # converting time_structs into datetimes) - start_as_datetime = datetime(*descriptor.lms.start[:6]) + start_as_datetime = descriptor.lms.start delta = timedelta(descriptor.lms.days_early_for_beta) effective = start_as_datetime - delta # ...and back to time_struct - return effective.timetuple() + return effective return descriptor.lms.start @@ -564,7 +560,7 @@ def _has_access_to_location(user, location, access_level, course_context): return True debug("Deny: user not in groups %s", staff_groups) - if access_level == 'instructor' or access_level == 'staff': # instructors get staff privileges + if access_level == 'instructor' or access_level == 'staff': # instructors get staff privileges instructor_groups = group_names_for_instructor(location, course_context) + \ [_course_org_instructor_group_name(location, course_context)] for instructor_group in instructor_groups: diff --git a/lms/djangoapps/courseware/features/navigation.feature b/lms/djangoapps/courseware/features/navigation.feature new file mode 100644 index 0000000000..8fd8b54c1a --- /dev/null +++ b/lms/djangoapps/courseware/features/navigation.feature @@ -0,0 +1,25 @@ +Feature: Navigate Course + As a student in an edX course + In order to view the course properly + I want to be able to navigate through the content + + Scenario: I can navigate to a section + Given I am viewing a course with multiple sections + When I click on section "2" + Then I should see the content of section "2" + + Scenario: I can navigate to subsections + Given I am viewing a section with multiple subsections + When I click on subsection "2" + Then I should see the content of subsection "2" + + Scenario: I can navigate to sequences + Given I am viewing a section with multiple sequences + When I click on sequence "2" + Then I should see the content of sequence "2" + + Scenario: I can go back to where I was after I log out and back in + Given I am viewing a course 
with multiple sections + When I click on section "2" + And I return later + Then I should see that I was most recently in section "2" diff --git a/lms/djangoapps/courseware/features/navigation.py b/lms/djangoapps/courseware/features/navigation.py new file mode 100644 index 0000000000..edd748e46f --- /dev/null +++ b/lms/djangoapps/courseware/features/navigation.py @@ -0,0 +1,186 @@ +#pylint: disable=C0111 +#pylint: disable=W0621 + +from lettuce import world, step +from django.contrib.auth.models import User +from lettuce.django import django_url +from student.models import CourseEnrollment +from common import course_id, course_location +from problems_setup import PROBLEM_DICT + +TEST_COURSE_ORG = 'edx' +TEST_COURSE_NAME = 'Test Course' +TEST_SECTION_NAME = 'Test Section' +TEST_SUBSECTION_NAME = 'Test Subsection' + + +@step(u'I am viewing a course with multiple sections') +def view_course_multiple_sections(step): + create_course() + # Add a section to the course to contain problems + section1 = world.ItemFactory.create(parent_location=course_location('model_course'), + display_name=section_name(1)) + + # Add a section to the course to contain problems + section2 = world.ItemFactory.create(parent_location=course_location('model_course'), + display_name=section_name(2)) + + place1 = world.ItemFactory.create(parent_location=section1.location, + template='i4x://edx/templates/sequential/Empty', + display_name=subsection_name(1)) + + place2 = world.ItemFactory.create(parent_location=section2.location, + template='i4x://edx/templates/sequential/Empty', + display_name=subsection_name(2)) + + add_problem_to_course_section('model_course', 'multiple choice', place1.location) + add_problem_to_course_section('model_course', 'drop down', place2.location) + + create_user_and_visit_course() + + +@step(u'I am viewing a section with multiple subsections') +def view_course_multiple_subsections(step): + create_course() + + # Add a section to the course to contain problems + section1 = 
world.ItemFactory.create(parent_location=course_location('model_course'), + display_name=section_name(1)) + + place1 = world.ItemFactory.create(parent_location=section1.location, + template='i4x://edx/templates/sequential/Empty', + display_name=subsection_name(1)) + + place2 = world.ItemFactory.create(parent_location=section1.location, + display_name=subsection_name(2)) + + add_problem_to_course_section('model_course', 'multiple choice', place1.location) + add_problem_to_course_section('model_course', 'drop down', place2.location) + + create_user_and_visit_course() + + +@step(u'I am viewing a section with multiple sequences') +def view_course_multiple_sequences(step): + create_course() + # Add a section to the course to contain problems + section1 = world.ItemFactory.create(parent_location=course_location('model_course'), + display_name=section_name(1)) + + place1 = world.ItemFactory.create(parent_location=section1.location, + template='i4x://edx/templates/sequential/Empty', + display_name=subsection_name(1)) + + add_problem_to_course_section('model_course', 'multiple choice', place1.location) + add_problem_to_course_section('model_course', 'drop down', place1.location) + + create_user_and_visit_course() + + +@step(u'I click on section "([^"]*)"$') +def click_on_section(step, section): + section_css = 'h3[tabindex="-1"]' + world.css_click(section_css) + + subid = "ui-accordion-accordion-panel-" + str(int(section) - 1) + subsection_css = 'ul[id="%s"]> li > a' % subid + #for some reason needed to do it in two steps + world.css_find(subsection_css).click() + + +@step(u'I click on subsection "([^"]*)"$') +def click_on_subsection(step, subsection): + subsection_css = 'ul[id="ui-accordion-accordion-panel-0"]> li > a' + world.css_find(subsection_css)[int(subsection) - 1].click() + + +@step(u'I click on sequence "([^"]*)"$') +def click_on_sequence(step, sequence): + sequence_css = 'a[data-element="%s"]' % sequence + world.css_click(sequence_css) + + +@step(u'I should see 
the content of (?:sub)?section "([^"]*)"$') +def see_section_content(step, section): + if section == "2": + text = 'The correct answer is Option 2' + elif section == "1": + text = 'The correct answer is Choice 3' + step.given('I should see "' + text + '" somewhere on the page') + + +@step(u'I should see the content of sequence "([^"]*)"$') +def see_sequence_content(step, sequence): + step.given('I should see the content of section "2"') + + +@step(u'I return later') +def return_to_course(step): + step.given('I visit the homepage') + world.click_link("View Course") + world.click_link("Courseware") + + +@step(u'I should see that I was most recently in section "([^"]*)"$') +def see_recent_section(step, section): + step.given('I should see "You were most recently in %s" somewhere on the page' % subsection_name(int(section))) + +##################### +# HELPERS +##################### + + +def section_name(section): + return TEST_SECTION_NAME + str(section) + + +def subsection_name(section): + return TEST_SUBSECTION_NAME + str(section) + + +def create_course(): + world.clear_courses() + + world.CourseFactory.create(org=TEST_COURSE_ORG, + number="model_course", + display_name=TEST_COURSE_NAME) + + +def create_user_and_visit_course(): + world.create_user('robot') + u = User.objects.get(username='robot') + + CourseEnrollment.objects.get_or_create(user=u, course_id=course_id("model_course")) + + world.log_in('robot', 'test') + chapter_name = (TEST_SECTION_NAME + "1").replace(" ", "_") + section_name = (TEST_SUBSECTION_NAME + "1").replace(" ", "_") + url = django_url('/courses/edx/model_course/Test_Course/courseware/%s/%s' % + (chapter_name, section_name)) + + world.browser.visit(url) + + +def add_problem_to_course_section(course, problem_type, parent_location, extraMeta=None): + ''' + Add a problem to the course we have created using factories. 
+ ''' + + assert(problem_type in PROBLEM_DICT) + + # Generate the problem XML using capa.tests.response_xml_factory + factory_dict = PROBLEM_DICT[problem_type] + problem_xml = factory_dict['factory'].build_xml(**factory_dict['kwargs']) + metadata = {'rerandomize': 'always'} if not 'metadata' in factory_dict else factory_dict['metadata'] + if extraMeta: + metadata = dict(metadata, **extraMeta) + + # Create a problem item using our generated XML + # We set rerandomize=always in the metadata so that the "Reset" button + # will appear. + template_name = "i4x://edx/templates/problem/Blank_Common_Problem" + world.ItemFactory.create(parent_location=parent_location, + template=template_name, + display_name=str(problem_type), + data=problem_xml, + metadata=metadata) diff --git a/lms/djangoapps/courseware/features/videoalpha.feature b/lms/djangoapps/courseware/features/videoalpha.feature new file mode 100644 index 0000000000..2a0acb0f9b --- /dev/null +++ b/lms/djangoapps/courseware/features/videoalpha.feature @@ -0,0 +1,6 @@ +Feature: Video Alpha component + As a student, I want to view course videos in LMS. 
+ + Scenario: Autoplay is enabled in LMS + Given the course has a Video component + Then when I view the video it has autoplay enabled diff --git a/lms/djangoapps/courseware/features/videoalpha.py b/lms/djangoapps/courseware/features/videoalpha.py new file mode 100644 index 0000000000..cabf8c681f --- /dev/null +++ b/lms/djangoapps/courseware/features/videoalpha.py @@ -0,0 +1,36 @@ +#pylint: disable=C0111 +#pylint: disable=W0613 +#pylint: disable=W0621 + +from lettuce import world, step +from lettuce.django import django_url +from common import TEST_COURSE_NAME, TEST_SECTION_NAME, i_am_registered_for_the_course, section_location + +############### ACTIONS #################### + + +@step('when I view the video it has autoplay enabled') +def does_autoplay(step): + assert(world.css_find('.videoalpha')[0]['data-autoplay'] == 'True') + + +@step('the course has a Video component') +def view_videoalpha(step): + coursename = TEST_COURSE_NAME.replace(' ', '_') + i_am_registered_for_the_course(step, coursename) + + # Make sure we have a videoalpha + add_videoalpha_to_course(coursename) + chapter_name = TEST_SECTION_NAME.replace(" ", "_") + section_name = chapter_name + url = django_url('/courses/edx/Test_Course/Test_Course/courseware/%s/%s' % + (chapter_name, section_name)) + + world.browser.visit(url) + + +def add_videoalpha_to_course(course): + template_name = 'i4x://edx/templates/videoalpha/default' + world.ItemFactory.create(parent_location=section_location(course), + template=template_name, + display_name='Video Alpha 1') diff --git a/lms/djangoapps/courseware/grades.py b/lms/djangoapps/courseware/grades.py index ae386f1528..e3c40079c3 100644 --- a/lms/djangoapps/courseware/grades.py +++ b/lms/djangoapps/courseware/grades.py @@ -364,7 +364,7 @@ def get_score(course_id, user, problem_descriptor, module_creator, model_data_ca else: return (None, None) - if not (problem_descriptor.stores_state and problem_descriptor.has_score): + if not problem_descriptor.has_score: # These 
are not problems, and do not have a score return (None, None) diff --git a/lms/djangoapps/courseware/models.py b/lms/djangoapps/courseware/models.py index 53493b8e45..79f1534f41 100644 --- a/lms/djangoapps/courseware/models.py +++ b/lms/djangoapps/courseware/models.py @@ -4,9 +4,9 @@ WE'RE USING MIGRATIONS! If you make changes to this model, be sure to create an appropriate migration file and check it in at the same time as your model changes. To do that, -1. Go to the mitx dir +1. Go to the edx-platform dir 2. ./manage.py schemamigration courseware --auto description_of_your_change -3. Add the migration file created in mitx/courseware/migrations/ +3. Add the migration file created in edx-platform/lms/djangoapps/courseware/migrations/ ASSUMPTIONS: modules have unique IDs, even across different module_types @@ -17,6 +17,7 @@ from django.db import models from django.db.models.signals import post_save from django.dispatch import receiver + class StudentModule(models.Model): """ Keeps student state for a particular module in a particular course. diff --git a/lms/djangoapps/courseware/module_render.py b/lms/djangoapps/courseware/module_render.py index 2ae7bcdc1f..0a44540577 100644 --- a/lms/djangoapps/courseware/module_render.py +++ b/lms/djangoapps/courseware/module_render.py @@ -121,7 +121,7 @@ def toc_for_course(user, request, course, active_chapter, active_section, model_ def get_module(user, request, location, model_data_cache, course_id, - position=None, not_found_ok = False, wrap_xmodule_display=True, + position=None, not_found_ok=False, wrap_xmodule_display=True, grade_bucket_type=None, depth=0): """ Get an instance of the xmodule class identified by location, @@ -161,16 +161,49 @@ def get_module(user, request, location, model_data_cache, course_id, return None -def get_module_for_descriptor(user, request, descriptor, model_data_cache, course_id, - position=None, wrap_xmodule_display=True, grade_bucket_type=None): - """ - Actually implement get_module. 
See docstring there for details. +def get_xqueue_callback_url_prefix(request): """ + Calculates default prefix based on request, but allows override via settings + This is separated from get_module_for_descriptor so that it can be called + by the LMS before submitting background tasks to run. The xqueue callbacks + should go back to the LMS, not to the worker. + """ + prefix = '{proto}://{host}'.format( + proto=request.META.get('HTTP_X_FORWARDED_PROTO', 'https' if request.is_secure() else 'http'), + host=request.get_host() + ) + return settings.XQUEUE_INTERFACE.get('callback_url', prefix) + + +def get_module_for_descriptor(user, request, descriptor, model_data_cache, course_id, + position=None, wrap_xmodule_display=True, grade_bucket_type=None): + """ + Implements get_module, extracting out the request-specific functionality. + + See get_module() docstring for further details. + """ # allow course staff to masquerade as student if has_access(user, descriptor, 'staff', course_id): setup_masquerade(request, True) + track_function = make_track_function(request) + xqueue_callback_url_prefix = get_xqueue_callback_url_prefix(request) + + return get_module_for_descriptor_internal(user, descriptor, model_data_cache, course_id, + track_function, xqueue_callback_url_prefix, + position, wrap_xmodule_display, grade_bucket_type) + + +def get_module_for_descriptor_internal(user, descriptor, model_data_cache, course_id, + track_function, xqueue_callback_url_prefix, + position=None, wrap_xmodule_display=True, grade_bucket_type=None): + """ + Actually implement get_module, without requiring a request. + + See get_module() docstring for further details. 
+ """ + # Short circuit--if the user shouldn't have access, bail without doing any work if not has_access(user, descriptor, 'load', course_id): return None @@ -186,19 +219,13 @@ def get_module_for_descriptor(user, request, descriptor, model_data_cache, cours def make_xqueue_callback(dispatch='score_update'): # Fully qualified callback URL for external queueing system - xqueue_callback_url = '{proto}://{host}'.format( - host=request.get_host(), - proto=request.META.get('HTTP_X_FORWARDED_PROTO', 'https' if request.is_secure() else 'http') - ) - xqueue_callback_url = settings.XQUEUE_INTERFACE.get('callback_url',xqueue_callback_url) # allow override - - xqueue_callback_url += reverse('xqueue_callback', - kwargs=dict(course_id=course_id, - userid=str(user.id), - id=descriptor.location.url(), - dispatch=dispatch), - ) - return xqueue_callback_url + relative_xqueue_callback_url = reverse('xqueue_callback', + kwargs=dict(course_id=course_id, + userid=str(user.id), + id=descriptor.location.url(), + dispatch=dispatch), + ) + return xqueue_callback_url_prefix + relative_xqueue_callback_url # Default queuename is course-specific and is derived from the course that # contains the current module. 
@@ -211,20 +238,20 @@ def get_module_for_descriptor(user, request, descriptor, model_data_cache, cours 'waittime': settings.XQUEUE_WAITTIME_BETWEEN_REQUESTS } - #This is a hacky way to pass settings to the combined open ended xmodule - #It needs an S3 interface to upload images to S3 - #It needs the open ended grading interface in order to get peer grading to be done - #this first checks to see if the descriptor is the correct one, and only sends settings if it is + # This is a hacky way to pass settings to the combined open ended xmodule + # It needs an S3 interface to upload images to S3 + # It needs the open ended grading interface in order to get peer grading to be done + # this first checks to see if the descriptor is the correct one, and only sends settings if it is - #Get descriptor metadata fields indicating needs for various settings + # Get descriptor metadata fields indicating needs for various settings needs_open_ended_interface = getattr(descriptor, "needs_open_ended_interface", False) needs_s3_interface = getattr(descriptor, "needs_s3_interface", False) - #Initialize interfaces to None + # Initialize interfaces to None open_ended_grading_interface = None s3_interface = None - #Create interfaces if needed + # Create interfaces if needed if needs_open_ended_interface: open_ended_grading_interface = settings.OPEN_ENDED_GRADING_INTERFACE open_ended_grading_interface['mock_peer_grading'] = settings.MOCK_PEER_GRADING @@ -238,10 +265,15 @@ def get_module_for_descriptor(user, request, descriptor, model_data_cache, cours def inner_get_module(descriptor): """ - Delegate to get_module. It does an access check, so may return None + Delegate to get_module_for_descriptor_internal() with all values except `descriptor` set. + + Because it does an access check, it may return None. 
""" - return get_module_for_descriptor(user, request, descriptor, - model_data_cache, course_id, position) + # TODO: fix this so that make_xqueue_callback uses the descriptor passed into + # inner_get_module, not the parent's callback. Add it as an argument.... + return get_module_for_descriptor_internal(user, descriptor, model_data_cache, course_id, + track_function, make_xqueue_callback, + position, wrap_xmodule_display, grade_bucket_type) def xblock_model_data(descriptor): return DbModel( @@ -291,7 +323,7 @@ def get_module_for_descriptor(user, request, descriptor, model_data_cache, cours # TODO (cpennington): When modules are shared between courses, the static # prefix is going to have to be specific to the module, not the directory # that the xml was loaded from - system = ModuleSystem(track_function=make_track_function(request), + system = ModuleSystem(track_function=track_function, render_template=render_to_string, ajax_url=ajax_url, xqueue=xqueue, diff --git a/lms/djangoapps/courseware/tests/__init__.py b/lms/djangoapps/courseware/tests/__init__.py index dc61239c36..1cb403018c 100644 --- a/lms/djangoapps/courseware/tests/__init__.py +++ b/lms/djangoapps/courseware/tests/__init__.py @@ -25,8 +25,8 @@ class BaseTestXmodule(ModuleStoreTestCase): """Base class for testing Xmodules with mongo store. This class prepares course and users for tests: - 1. create test course - 2. create, enrol and login users for this course + 1. create test course; + 2. create, enrol and login users for this course; Any xmodule should overwrite only next parameters for test: 1. 
TEMPLATE_NAME @@ -77,14 +77,15 @@ class BaseTestXmodule(ModuleStoreTestCase): data=self.DATA ) - location = self.item_descriptor.location system = test_system() system.render_template = lambda template, context: context + model_data = {'location': self.item_descriptor.location} + model_data.update(self.MODEL_DATA) self.item_module = self.item_descriptor.module_class( - system, location, self.item_descriptor, self.MODEL_DATA + system, self.item_descriptor, model_data ) - self.item_url = Location(location).url() + self.item_url = Location(self.item_module.location).url() # login all users for acces to Xmodule self.clients = {user.username: Client() for user in self.users} diff --git a/lms/djangoapps/courseware/tests/factories.py b/lms/djangoapps/courseware/tests/factories.py index af33ba1211..26df68ca7e 100644 --- a/lms/djangoapps/courseware/tests/factories.py +++ b/lms/djangoapps/courseware/tests/factories.py @@ -12,6 +12,7 @@ from courseware.models import StudentModule, XModuleContentField, XModuleSetting from courseware.models import XModuleStudentInfoField, XModuleStudentPrefsField from xmodule.modulestore import Location +from pytz import UTC location = partial(Location, 'i4x', 'edX', 'test_course', 'problem') @@ -28,8 +29,8 @@ class RegistrationFactory(StudentRegistrationFactory): class UserFactory(StudentUserFactory): email = 'robot@edx.org' last_name = 'Tester' - last_login = datetime.now() - date_joined = datetime.now() + last_login = datetime.now(UTC) + date_joined = datetime.now(UTC) class GroupFactory(StudentGroupFactory): diff --git a/lms/djangoapps/courseware/tests/test_access.py b/lms/djangoapps/courseware/tests/test_access.py index c1bb9f203e..34d064971f 100644 --- a/lms/djangoapps/courseware/tests/test_access.py +++ b/lms/djangoapps/courseware/tests/test_access.py @@ -1,18 +1,12 @@ -import unittest -import logging -import time -from mock import Mock, MagicMock, patch +from mock import Mock, patch -from django.conf import settings from django.test 
import TestCase -from xmodule.course_module import CourseDescriptor -from xmodule.error_module import ErrorDescriptor from xmodule.modulestore import Location -from xmodule.timeparse import parse_time -from xmodule.x_module import XModule, XModuleDescriptor import courseware.access as access from .factories import CourseEnrollmentAllowedFactory +import datetime +from django.utils.timezone import UTC class AccessTestCase(TestCase): @@ -77,7 +71,7 @@ class AccessTestCase(TestCase): # TODO: override DISABLE_START_DATES and test the start date branch of the method u = Mock() d = Mock() - d.start = time.gmtime(time.time() - 86400) # make sure the start time is in the past + d.start = datetime.datetime.now(UTC()) - datetime.timedelta(days=1) # make sure the start time is in the past # Always returns true because DISABLE_START_DATES is set in test.py self.assertTrue(access._has_access_descriptor(u, d, 'load')) @@ -85,8 +79,8 @@ class AccessTestCase(TestCase): def test__has_access_course_desc_can_enroll(self): u = Mock() - yesterday = time.gmtime(time.time() - 86400) - tomorrow = time.gmtime(time.time() + 86400) + yesterday = datetime.datetime.now(UTC()) - datetime.timedelta(days=1) + tomorrow = datetime.datetime.now(UTC()) + datetime.timedelta(days=1) c = Mock(enrollment_start=yesterday, enrollment_end=tomorrow) # User can enroll if it is between the start and end dates diff --git a/lms/djangoapps/courseware/tests/test_model_data.py b/lms/djangoapps/courseware/tests/test_model_data.py index 0966fb1aeb..9f225f73bd 100644 --- a/lms/djangoapps/courseware/tests/test_model_data.py +++ b/lms/djangoapps/courseware/tests/test_model_data.py @@ -26,7 +26,6 @@ def mock_field(scope, name): def mock_descriptor(fields=[], lms_fields=[]): descriptor = Mock() - descriptor.stores_state = True descriptor.location = location('def_id') descriptor.module_class.fields = fields descriptor.module_class.lms.fields = lms_fields diff --git a/lms/djangoapps/courseware/tests/test_tabs.py 
b/lms/djangoapps/courseware/tests/test_tabs.py index 04c46a7820..e8d57f34af 100644 --- a/lms/djangoapps/courseware/tests/test_tabs.py +++ b/lms/djangoapps/courseware/tests/test_tabs.py @@ -11,21 +11,22 @@ from courseware.tests.tests import TEST_DATA_MONGO_MODULESTORE from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase from xmodule.modulestore.tests.factories import CourseFactory + class ProgressTestCase(TestCase): - def setUp(self): + def setUp(self): - self.mockuser1 = MagicMock() - self.mockuser0 = MagicMock() - self.course = MagicMock() - self.mockuser1.is_authenticated.return_value = True - self.mockuser0.is_authenticated.return_value = False - self.course.id = 'edX/full/6.002_Spring_2012' - self.tab = {'name': 'same'} - self.active_page1 = 'progress' - self.active_page0 = 'stagnation' + self.mockuser1 = MagicMock() + self.mockuser0 = MagicMock() + self.course = MagicMock() + self.mockuser1.is_authenticated.return_value = True + self.mockuser0.is_authenticated.return_value = False + self.course.id = 'edX/full/6.002_Spring_2012' + self.tab = {'name': 'same'} + self.active_page1 = 'progress' + self.active_page0 = 'stagnation' - def test_progress(self): + def test_progress(self): self.assertEqual(tabs._progress(self.tab, self.mockuser0, self.course, self.active_page0), []) @@ -34,8 +35,8 @@ class ProgressTestCase(TestCase): self.active_page1)[0].name, 'same') self.assertEqual(tabs._progress(self.tab, self.mockuser1, self.course, - self.active_page1)[0].link, - reverse('progress', args = [self.course.id])) + self.active_page1)[0].link, + reverse('progress', args=[self.course.id])) self.assertEqual(tabs._progress(self.tab, self.mockuser1, self.course, self.active_page0)[0].is_active, False) @@ -63,15 +64,15 @@ class WikiTestCase(TestCase): 'same') self.assertEqual(tabs._wiki(self.tab, self.user, - self.course, self.active_page1)[0].link, + self.course, self.active_page1)[0].link, reverse('course_wiki', args=[self.course.id])) 
self.assertEqual(tabs._wiki(self.tab, self.user, - self.course, self.active_page1)[0].is_active, + self.course, self.active_page1)[0].is_active, True) self.assertEqual(tabs._wiki(self.tab, self.user, - self.course, self.active_page0)[0].is_active, + self.course, self.active_page0)[0].is_active, False) @override_settings(WIKI_ENABLED=False) @@ -129,14 +130,13 @@ class StaticTabTestCase(TestCase): self.assertEqual(tabs._static_tab(self.tabby, self.user, self.course, self.active_page1)[0].link, - reverse('static_tab', args = [self.course.id, - self.tabby['url_slug']])) + reverse('static_tab', args=[self.course.id, + self.tabby['url_slug']])) self.assertEqual(tabs._static_tab(self.tabby, self.user, self.course, self.active_page1)[0].is_active, True) - self.assertEqual(tabs._static_tab(self.tabby, self.user, self.course, self.active_page0)[0].is_active, False) @@ -183,7 +183,7 @@ class TextbooksTestCase(TestCase): self.assertEqual(tabs._textbooks(self.tab, self.mockuser1, self.course, self.active_page1)[1].name, - 'Topology') + 'Topology') self.assertEqual(tabs._textbooks(self.tab, self.mockuser1, self.course, self.active_page1)[1].link, @@ -206,6 +206,7 @@ class TextbooksTestCase(TestCase): self.assertEqual(tabs._textbooks(self.tab, self.mockuser0, self.course, self.active_pageX), []) + class KeyCheckerTestCase(TestCase): def setUp(self): @@ -223,39 +224,36 @@ class KeyCheckerTestCase(TestCase): class NullValidatorTestCase(TestCase): - def setUp(self): + def setUp(self): - self.d = {} + self.dummy = {} - def test_null_validator(self): - - self.assertIsNone(tabs.null_validator(self.d)) + def test_null_validator(self): + self.assertIsNone(tabs.null_validator(self.dummy)) class ValidateTabsTestCase(TestCase): def setUp(self): - self.courses = [MagicMock() for i in range(0,5)] + self.courses = [MagicMock() for i in range(0, 5)] self.courses[0].tabs = None - self.courses[1].tabs = [{'type':'courseware'}, {'type': 'fax'}] + self.courses[1].tabs = [{'type': 'courseware'}, 
{'type': 'fax'}] - self.courses[2].tabs = [{'type':'shadow'}, {'type': 'course_info'}] + self.courses[2].tabs = [{'type': 'shadow'}, {'type': 'course_info'}] - self.courses[3].tabs = [{'type':'courseware'},{'type':'course_info', 'name': 'alice'}, - {'type': 'wiki', 'name':'alice'}, {'type':'discussion', 'name': 'alice'}, - {'type':'external_link', 'name': 'alice', 'link':'blink'}, - {'type':'textbooks'}, {'type':'progress', 'name': 'alice'}, - {'type':'static_tab', 'name':'alice', 'url_slug':'schlug'}, - {'type': 'staff_grading'}] - - self.courses[4].tabs = [{'type':'courseware'},{'type': 'course_info'}, {'type': 'flying'}] + self.courses[3].tabs = [{'type': 'courseware'}, {'type': 'course_info', 'name': 'alice'}, + {'type': 'wiki', 'name': 'alice'}, {'type': 'discussion', 'name': 'alice'}, + {'type': 'external_link', 'name': 'alice', 'link': 'blink'}, + {'type': 'textbooks'}, {'type': 'progress', 'name': 'alice'}, + {'type': 'static_tab', 'name': 'alice', 'url_slug': 'schlug'}, + {'type': 'staff_grading'}] + self.courses[4].tabs = [{'type': 'courseware'}, {'type': 'course_info'}, {'type': 'flying'}] def test_validate_tabs(self): - self.assertIsNone(tabs.validate_tabs(self.courses[0])) self.assertRaises(tabs.InvalidTabsException, tabs.validate_tabs, self.courses[1]) self.assertRaises(tabs.InvalidTabsException, tabs.validate_tabs, self.courses[2]) @@ -268,15 +266,15 @@ class DiscussionLinkTestCase(ModuleStoreTestCase): def setUp(self): self.tabs_with_discussion = [ - {'type':'courseware'}, - {'type':'course_info'}, - {'type':'discussion'}, - {'type':'textbooks'}, + {'type': 'courseware'}, + {'type': 'course_info'}, + {'type': 'discussion'}, + {'type': 'textbooks'}, ] self.tabs_without_discussion = [ - {'type':'courseware'}, - {'type':'course_info'}, - {'type':'textbooks'}, + {'type': 'courseware'}, + {'type': 'course_info'}, + {'type': 'textbooks'}, ] @staticmethod diff --git a/lms/djangoapps/courseware/tests/test_video_mongo.py 
b/lms/djangoapps/courseware/tests/test_video_mongo.py index c041ccc151..a0fdecc77a 100644 --- a/lms/djangoapps/courseware/tests/test_video_mongo.py +++ b/lms/djangoapps/courseware/tests/test_video_mongo.py @@ -15,8 +15,8 @@ class TestVideo(BaseTestXmodule): user.username: self.clients[user.username].post( self.get_url('whatever'), {}, - HTTP_X_REQUESTED_WITH='XMLHttpRequest') - for user in self.users + HTTP_X_REQUESTED_WITH='XMLHttpRequest' + ) for user in self.users } self.assertEqual( diff --git a/lms/djangoapps/courseware/tests/test_videoalpha_mongo.py b/lms/djangoapps/courseware/tests/test_videoalpha_mongo.py new file mode 100644 index 0000000000..a6bff60acf --- /dev/null +++ b/lms/djangoapps/courseware/tests/test_videoalpha_mongo.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +"""Video xmodule tests in mongo.""" + +from . import BaseTestXmodule +from .test_videoalpha_xml import SOURCE_XML +from django.conf import settings + + +class TestVideo(BaseTestXmodule): + """Integration tests: web client + mongo.""" + + TEMPLATE_NAME = "i4x://edx/templates/videoalpha/Video_Alpha" + DATA = SOURCE_XML + MODEL_DATA = { + 'data': DATA + } + + def test_handle_ajax_dispatch(self): + responses = { + user.username: self.clients[user.username].post( + self.get_url('whatever'), + {}, + HTTP_X_REQUESTED_WITH='XMLHttpRequest') + for user in self.users + } + + self.assertEqual( + set([ + response.status_code + for _, response in responses.items() + ]).pop(), + 404) + + def test_videoalpha_constructor(self): + """Make sure that all parameters are extracted correctly from xml""" + + # `get_html` returns only the context, because we + # overwrite `system.render_template` + context = self.item_module.get_html() + expected_context = { + 'data_dir': getattr(self, 'data_dir', None), + 'caption_asset_path': '/c4x/MITx/999/asset/subs_', + 'show_captions': self.item_module.show_captions, + 'display_name': self.item_module.display_name_with_default, + 'end': self.item_module.end_time, + 'id':
self.item_module.location.html_id(), + 'sources': self.item_module.sources, + 'start': self.item_module.start_time, + 'sub': self.item_module.sub, + 'track': self.item_module.track, + 'youtube_streams': self.item_module.youtube_streams, + 'autoplay': settings.MITX_FEATURES.get('AUTOPLAY_VIDEOS', True) + } + self.assertDictEqual(context, expected_context) diff --git a/lms/djangoapps/courseware/tests/test_videoalpha_xml.py b/lms/djangoapps/courseware/tests/test_videoalpha_xml.py new file mode 100644 index 0000000000..44e0a7811a --- /dev/null +++ b/lms/djangoapps/courseware/tests/test_videoalpha_xml.py @@ -0,0 +1,129 @@ +# -*- coding: utf-8 -*- +"""Test for VideoAlpha Xmodule functional logic. +These tests read data from xml, not from mongo. + +We have a ModuleStoreTestCase class defined in +common/lib/xmodule/xmodule/modulestore/tests/django_utils.py. +You can search for usages of this in the cms and lms tests for examples. +You use this so that it will do things like point the modulestore +setting to mongo, flush the contentstore before and after, load the +templates, etc. +You can then use the CourseFactory and XModuleItemFactory as defined in +common/lib/xmodule/xmodule/modulestore/tests/factories.py to create the +course, section, subsection, unit, etc. +""" + +import json +import unittest +from mock import Mock +from lxml import etree + +from django.conf import settings + +from xmodule.videoalpha_module import VideoAlphaDescriptor, VideoAlphaModule +from xmodule.modulestore import Location +from xmodule.tests import test_system +from xmodule.tests.test_logic import LogicTest + + +SOURCE_XML = """ + + + + + +""" + + +class VideoAlphaFactory(object): + """A helper class to create videoalpha modules with various parameters + for testing.
+ """ + + # tag that uses youtube videos + sample_problem_xml_youtube = SOURCE_XML + + @staticmethod + def create(): + """Method return VideoAlpha Xmodule instance.""" + location = Location(["i4x", "edX", "videoalpha", "default", + "SampleProblem1"]) + model_data = {'data': VideoAlphaFactory.sample_problem_xml_youtube} + + descriptor = Mock(weight="1") + + system = test_system() + system.render_template = lambda template, context: context + VideoAlphaModule.location = location + module = VideoAlphaModule(system, descriptor, model_data) + + return module + + +class VideoAlphaModuleTest(LogicTest): + """Tests for logic of VideoAlpha Xmodule.""" + + descriptor_class = VideoAlphaDescriptor + + raw_model_data = { + 'data': '' + } + + def test_get_timeframe_no_parameters(self): + xmltree = etree.fromstring('test') + output = self.xmodule.get_timeframe(xmltree) + self.assertEqual(output, ('', '')) + + def test_get_timeframe_with_one_parameter(self): + xmltree = etree.fromstring( + 'test' + ) + output = self.xmodule.get_timeframe(xmltree) + self.assertEqual(output, (247, '')) + + def test_get_timeframe_with_two_parameters(self): + xmltree = etree.fromstring( + '''test''' + ) + output = self.xmodule.get_timeframe(xmltree) + self.assertEqual(output, (247, 47079)) + + +class VideoAlphaModuleUnitTest(unittest.TestCase): + """Unit tests for VideoAlpha Xmodule.""" + + def test_videoalpha_constructor(self): + """Make sure that all parameters extracted correclty from xml""" + module = VideoAlphaFactory.create() + + # `get_html` return only context, cause we + # overwrite `system.render_template` + context = module.get_html() + expected_context = { + 'caption_asset_path': '/static/subs/', + 'sub': module.sub, + 'data_dir': getattr(self, 'data_dir', None), + 'display_name': module.display_name_with_default, + 'end': module.end_time, + 'start': module.start_time, + 'id': module.location.html_id(), + 'show_captions': module.show_captions, + 'sources': module.sources, + 
'youtube_streams': module.youtube_streams, + 'track': module.track, + 'autoplay': settings.MITX_FEATURES.get('AUTOPLAY_VIDEOS', True) + } + self.assertDictEqual(context, expected_context) + + self.assertDictEqual( + json.loads(module.get_instance_state()), + {'position': 0}) diff --git a/lms/djangoapps/courseware/tests/test_views.py b/lms/djangoapps/courseware/tests/test_views.py index 1d3166893e..25492ad379 100644 --- a/lms/djangoapps/courseware/tests/test_views.py +++ b/lms/djangoapps/courseware/tests/test_views.py @@ -13,6 +13,7 @@ from xmodule.modulestore.django import modulestore import courseware.views as views from xmodule.modulestore import Location +from pytz import UTC class Stub(): @@ -63,7 +64,7 @@ class ViewsTestCase(TestCase): def setUp(self): self.user = User.objects.create(username='dummy', password='123456', email='test@mit.edu') - self.date = datetime.datetime(2013, 1, 22) + self.date = datetime.datetime(2013, 1, 22, tzinfo=UTC) self.course_id = 'edX/toy/2012_Fall' self.enrollment = CourseEnrollment.objects.get_or_create(user=self.user, course_id=self.course_id, diff --git a/lms/djangoapps/courseware/tests/tests.py b/lms/djangoapps/courseware/tests/tests.py index ec3e55b1b8..056a73e7c8 100644 --- a/lms/djangoapps/courseware/tests/tests.py +++ b/lms/djangoapps/courseware/tests/tests.py @@ -3,7 +3,6 @@ Test for lms courseware app ''' import logging import json -import time import random from urlparse import urlsplit, urlunsplit @@ -30,6 +29,8 @@ from xmodule.modulestore.django import modulestore from xmodule.modulestore import Location from xmodule.modulestore.xml_importer import import_from_xml from xmodule.modulestore.xml import XMLModuleStore +import datetime +from django.utils.timezone import UTC log = logging.getLogger("mitx." 
+ __name__) @@ -64,7 +65,7 @@ def mongo_store_config(data_dir): 'db': 'test_xmodule', 'collection': 'modulestore_%s' % uuid4().hex, 'fs_root': data_dir, - 'render_template': 'mitxmako.shortcuts.render_to_string', + 'render_template': 'mitxmako.shortcuts.render_to_string' } } } @@ -287,7 +288,7 @@ class PageLoaderTestCase(LoginEnrollmentTestCase): ''' Choose a page in the course randomly, and assert that it loads ''' - # enroll in the course before trying to access pages + # enroll in the course before trying to access pages courses = module_store.get_courses() self.assertEqual(len(courses), 1) course = courses[0] @@ -603,9 +604,9 @@ class TestViewAuth(LoginEnrollmentTestCase): """Actually do the test, relying on settings to be right.""" # Make courses start in the future - tomorrow = time.time() + 24 * 3600 - self.toy.lms.start = time.gmtime(tomorrow) - self.full.lms.start = time.gmtime(tomorrow) + tomorrow = datetime.datetime.now(UTC()) + datetime.timedelta(days=1) + self.toy.lms.start = tomorrow + self.full.lms.start = tomorrow self.assertFalse(self.toy.has_started()) self.assertFalse(self.full.has_started()) @@ -728,18 +729,18 @@ class TestViewAuth(LoginEnrollmentTestCase): """Actually do the test, relying on settings to be right.""" # Make courses start in the future - tomorrow = time.time() + 24 * 3600 - nextday = tomorrow + 24 * 3600 - yesterday = time.time() - 24 * 3600 + tomorrow = datetime.datetime.now(UTC()) + datetime.timedelta(days=1) + nextday = tomorrow + datetime.timedelta(days=1) + yesterday = datetime.datetime.now(UTC()) - datetime.timedelta(days=1) print "changing" # toy course's enrollment period hasn't started - self.toy.enrollment_start = time.gmtime(tomorrow) - self.toy.enrollment_end = time.gmtime(nextday) + self.toy.enrollment_start = tomorrow + self.toy.enrollment_end = nextday # full course's has - self.full.enrollment_start = time.gmtime(yesterday) - self.full.enrollment_end = time.gmtime(tomorrow) + self.full.enrollment_start = yesterday 
+ self.full.enrollment_end = tomorrow print "login" # First, try with an enrolled student @@ -778,12 +779,10 @@ class TestViewAuth(LoginEnrollmentTestCase): self.assertFalse(settings.MITX_FEATURES['DISABLE_START_DATES']) # Make courses start in the future - tomorrow = time.time() + 24 * 3600 - # nextday = tomorrow + 24 * 3600 - # yesterday = time.time() - 24 * 3600 + tomorrow = datetime.datetime.now(UTC()) + datetime.timedelta(days=1) # toy course's hasn't started - self.toy.lms.start = time.gmtime(tomorrow) + self.toy.lms.start = tomorrow self.assertFalse(self.toy.has_started()) # but should be accessible for beta testers @@ -854,7 +853,7 @@ class TestSubmittingProblems(LoginEnrollmentTestCase): modx_url = self.modx_url(problem_location, 'problem_check') answer_key_prefix = 'input_i4x-edX-{}-problem-{}_'.format(self.course_slug, problem_url_name) resp = self.client.post(modx_url, - { (answer_key_prefix + k): v for k,v in responses.items() } + { (answer_key_prefix + k): v for k, v in responses.items() } ) return resp @@ -925,7 +924,7 @@ class TestCourseGrader(TestSubmittingProblems): # Only get half of the first problem correct self.submit_question_answer('H1P1', {'2_1': 'Correct', '2_2': 'Incorrect'}) self.check_grade_percent(0.06) - self.assertEqual(earned_hw_scores(), [1.0, 0, 0]) # Order matters + self.assertEqual(earned_hw_scores(), [1.0, 0, 0]) # Order matters self.assertEqual(score_for_hw('Homework1'), [1.0, 0.0]) # Get both parts of the first problem correct @@ -958,16 +957,16 @@ class TestCourseGrader(TestSubmittingProblems): # Third homework self.submit_question_answer('H3P1', {'2_1': 'Correct', '2_2': 'Correct'}) - self.check_grade_percent(0.42) # Score didn't change + self.check_grade_percent(0.42) # Score didn't change self.assertEqual(earned_hw_scores(), [4.0, 4.0, 2.0]) self.submit_question_answer('H3P2', {'2_1': 'Correct', '2_2': 'Correct'}) - self.check_grade_percent(0.5) # Now homework2 dropped. 
Score changes + self.check_grade_percent(0.5) # Now homework2 dropped. Score changes self.assertEqual(earned_hw_scores(), [4.0, 4.0, 4.0]) # Now we answer the final question (worth half of the grade) self.submit_question_answer('FinalQuestion', {'2_1': 'Correct', '2_2': 'Correct'}) - self.check_grade_percent(1.0) # Hooray! We got 100% + self.check_grade_percent(1.0) # Hooray! We got 100% @override_settings(MODULESTORE=TEST_DATA_XML_MODULESTORE) @@ -1000,7 +999,7 @@ class TestSchematicResponse(TestSubmittingProblems): { '2_1': json.dumps( [['transient', {'Z': [ [0.0000004, 2.8], - [0.0000009, 0.0], # wrong. + [0.0000009, 0.0], # wrong. [0.0000014, 2.8], [0.0000019, 2.8], [0.0000024, 2.8], diff --git a/lms/djangoapps/django_comment_client/base/tests.py b/lms/djangoapps/django_comment_client/base/tests.py index 3e06402ddd..aa5b657bd6 100644 --- a/lms/djangoapps/django_comment_client/base/tests.py +++ b/lms/djangoapps/django_comment_client/base/tests.py @@ -1,5 +1,6 @@ import logging +from django.conf import settings from django.test.utils import override_settings from django.test.client import Client from django.contrib.auth.models import User @@ -8,6 +9,7 @@ from xmodule.modulestore.tests.factories import CourseFactory from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase from django.core.urlresolvers import reverse from django.core.management import call_command +from util.testing import UrlResetMixin from courseware.tests.tests import TEST_DATA_MONGO_MODULESTORE from nose.tools import assert_true, assert_equal @@ -18,8 +20,19 @@ log = logging.getLogger(__name__) @override_settings(MODULESTORE=TEST_DATA_MONGO_MODULESTORE) @patch('comment_client.utils.requests.request') -class ViewsTestCase(ModuleStoreTestCase): +class ViewsTestCase(UrlResetMixin, ModuleStoreTestCase): def setUp(self): + + # This feature affects the contents of urls.py, so we change + # it before the call to super.setUp() which reloads urls.py (because + # of the UrlResetMixin) + + # 
This setting is cleaned up at the end of the test by @override_settings, which + # restores all of the old settings + settings.MITX_FEATURES['ENABLE_DISCUSSION_SERVICE'] = True + + super(ViewsTestCase, self).setUp() + # create a course self.course = CourseFactory.create(org='MITx', course='999', display_name='Robot Super Course') diff --git a/lms/djangoapps/django_comment_client/utils.py b/lms/djangoapps/django_comment_client/utils.py index 276956f0e9..496c834950 100644 --- a/lms/djangoapps/django_comment_client/utils.py +++ b/lms/djangoapps/django_comment_client/utils.py @@ -1,14 +1,9 @@ -import time +import pytz from collections import defaultdict import logging -import time import urllib from datetime import datetime -from courseware.module_render import get_module -from xmodule.modulestore import Location -from xmodule.modulestore.django import modulestore -from xmodule.modulestore.search import path_to_location from django.contrib.auth.models import User from django.core.urlresolvers import reverse from django.db import connection @@ -16,13 +11,12 @@ from django.http import HttpResponse from django.utils import simplejson from django_comment_common.models import Role from django_comment_client.permissions import check_permissions_by_view -from xmodule.modulestore.exceptions import NoPathToItem from mitxmako import middleware import pystache_custom as pystache -from xmodule.modulestore import Location from xmodule.modulestore.django import modulestore +from django.utils.timezone import UTC log = logging.getLogger(__name__) @@ -100,7 +94,7 @@ def get_discussion_category_map(course): def filter_unstarted_categories(category_map): - now = time.gmtime() + now = datetime.now(UTC()) result_map = {} @@ -175,7 +169,9 @@ def initialize_discussion_info(course): category = " / ".join([x.strip() for x in category.split("/")]) last_category = category.split("/")[-1] discussion_id_map[id] = {"location": module.location, "title": last_category + " / " + title} - 
unexpanded_category_map[category].append({"title": title, "id": id, "sort_key": sort_key, "start_date": module.lms.start}) + #Handle case where module.lms.start is None + entry_start_date = module.lms.start if module.lms.start else datetime.max.replace(tzinfo=pytz.UTC) + unexpanded_category_map[category].append({"title": title, "id": id, "sort_key": sort_key, "start_date": entry_start_date}) category_map = {"entries": defaultdict(dict), "subcategories": defaultdict(dict)} for category_path, entries in unexpanded_category_map.items(): @@ -220,12 +216,12 @@ def initialize_discussion_info(course): for topic, entry in course.discussion_topics.items(): category_map['entries'][topic] = {"id": entry["id"], "sort_key": entry.get("sort_key", topic), - "start_date": time.gmtime()} + "start_date": datetime.now(UTC())} sort_map_entries(category_map) _DISCUSSIONINFO[course.id]['id_map'] = discussion_id_map _DISCUSSIONINFO[course.id]['category_map'] = category_map - _DISCUSSIONINFO[course.id]['timestamp'] = datetime.now() + _DISCUSSIONINFO[course.id]['timestamp'] = datetime.now(UTC()) class JsonResponse(HttpResponse): @@ -292,7 +288,7 @@ def get_ability(course_id, content, user): 'can_vote': check_permissions_by_view(user, course_id, content, "vote_for_thread" if content['type'] == 'thread' else "vote_for_comment"), } -#TODO: RENAME +# TODO: RENAME def get_annotated_content_info(course_id, content, user, user_info): @@ -310,7 +306,7 @@ def get_annotated_content_info(course_id, content, user, user_info): 'ability': get_ability(course_id, content, user), } -#TODO: RENAME +# TODO: RENAME def get_annotated_content_infos(course_id, thread, user, user_info): diff --git a/lms/djangoapps/foldit/tests.py b/lms/djangoapps/foldit/tests.py index afdd678f06..9928f596be 100644 --- a/lms/djangoapps/foldit/tests.py +++ b/lms/djangoapps/foldit/tests.py @@ -13,6 +13,7 @@ from foldit.models import PuzzleComplete, Score from student.models import UserProfile, unique_id_for_user from datetime import 
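The `entry_start_date` fallback above keeps the category map sortable when `module.lms.start` is `None` (the CHANGELOG's "discussion module can get `None`" fix). A standalone sketch of the idea, with made-up dates; the real code uses `pytz.UTC`, for which the stdlib `timezone.utc` is an equivalent stand-in:

```python
from datetime import datetime, timezone

# Substitute an aware datetime.max so every entry is comparable;
# undated entries then sort to the end.
FAR_FUTURE = datetime.max.replace(tzinfo=timezone.utc)

# Hypothetical module.lms.start values; None means "no start date set".
raw_starts = [
    datetime(2013, 1, 22, tzinfo=timezone.utc),
    None,  # this entry used to break sorting
    datetime(2012, 9, 5, tzinfo=timezone.utc),
]
entry_starts = [start if start else FAR_FUTURE for start in raw_starts]

assert sorted(entry_starts)[0] == datetime(2012, 9, 5, tzinfo=timezone.utc)
assert sorted(entry_starts)[-1] == FAR_FUTURE
```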
datetime, timedelta +from pytz import UTC log = logging.getLogger(__name__) @@ -28,7 +29,7 @@ class FolditTestCase(TestCase): self.user2 = User.objects.create_user('testuser2', 'test2@test.com', pwd) self.unique_user_id = unique_id_for_user(self.user) self.unique_user_id2 = unique_id_for_user(self.user2) - now = datetime.now() + now = datetime.now(UTC) self.tomorrow = now + timedelta(days=1) self.yesterday = now - timedelta(days=1) @@ -222,7 +223,7 @@ class FolditTestCase(TestCase): verify = {"Verify": verify_code(self.user.email, puzzles_str), "VerifyMethod":"FoldItVerify"} - data = {'SetPuzzlesCompleteVerify': json.dumps(verify), + data = {'SetPuzzlesCompleteVerify': json.dumps(verify), 'SetPuzzlesComplete': puzzles_str} request = self.make_request(data) diff --git a/lms/djangoapps/instructor/views.py b/lms/djangoapps/instructor/views.py index 63869fb48b..e9fff63698 100644 --- a/lms/djangoapps/instructor/views.py +++ b/lms/djangoapps/instructor/views.py @@ -10,7 +10,6 @@ import os import re import requests from requests.status_codes import codes -import urllib from collections import OrderedDict from StringIO import StringIO @@ -20,8 +19,10 @@ from django.contrib.auth.models import User, Group from django.http import HttpResponse from django_future.csrf import ensure_csrf_cookie from django.views.decorators.cache import cache_control -from mitxmako.shortcuts import render_to_response from django.core.urlresolvers import reverse +import xmodule.graders as xmgraders +from xmodule.modulestore.django import modulestore +from xmodule.modulestore.exceptions import ItemNotFoundError from courseware import grades from courseware.access import (has_access, get_access_group_name, @@ -33,13 +34,18 @@ from django_comment_common.models import (Role, FORUM_ROLE_MODERATOR, FORUM_ROLE_COMMUNITY_TA) from django_comment_client.utils import has_forum_access +from instructor.offline_gradecalc import student_grades, offline_grades_available +from instructor_task.api import 
(get_running_instructor_tasks, + get_instructor_task_history, + submit_rescore_problem_for_all_students, + submit_rescore_problem_for_student, + submit_reset_problem_attempts_for_all_students) +from instructor_task.views import get_task_completion_info +from mitxmako.shortcuts import render_to_response from psychometrics import psychoanalyze from student.models import CourseEnrollment, CourseEnrollmentAllowed -from xmodule.modulestore.django import modulestore -import xmodule.graders as xmgraders import track.views -from .offline_gradecalc import student_grades, offline_grades_available log = logging.getLogger(__name__) @@ -68,6 +74,7 @@ def instructor_dashboard(request, course_id): msg = '' problems = [] plots = [] + datatable = {} # the instructor dashboard page is modal: grades, psychometrics, admin # keep that state in request.session (defaults to grades mode) @@ -78,26 +85,29 @@ def instructor_dashboard(request, course_id): idash_mode = request.session.get('idash_mode', 'Grades') # assemble some course statistics for output to instructor - datatable = {'header': ['Statistic', 'Value'], - 'title': 'Course Statistics At A Glance', - } - data = [['# Enrolled', CourseEnrollment.objects.filter(course_id=course_id).count()]] - data += compute_course_stats(course).items() - if request.user.is_staff: - for field in course.fields: - if getattr(field.scope, 'user', False): - continue - - data.append([field.name, json.dumps(field.read_json(course))]) - for namespace in course.namespaces: - for field in getattr(course, namespace).fields: + def get_course_stats_table(): + datatable = {'header': ['Statistic', 'Value'], + 'title': 'Course Statistics At A Glance', + } + data = [['# Enrolled', CourseEnrollment.objects.filter(course_id=course_id).count()]] + data += compute_course_stats(course).items() + if request.user.is_staff: + for field in course.fields: if getattr(field.scope, 'user', False): continue - data.append(["{}.{}".format(namespace, field.name), 
json.dumps(field.read_json(course))]) - datatable['data'] = data + data.append([field.name, json.dumps(field.read_json(course))]) + for namespace in course.namespaces: + for field in getattr(course, namespace).fields: + if getattr(field.scope, 'user', False): + continue + + data.append(["{}.{}".format(namespace, field.name), json.dumps(field.read_json(course))]) + datatable['data'] = data + return datatable def return_csv(fn, datatable, fp=None): + """Outputs a CSV file from the contents of a datatable.""" if fp is None: response = HttpResponse(mimetype='text/csv') response['Content-Disposition'] = 'attachment; filename={0}'.format(fn) @@ -111,12 +121,15 @@ def instructor_dashboard(request, course_id): return response def get_staff_group(course): + """Get or create the staff access group""" return get_group(course, 'staff') def get_instructor_group(course): + """Get or create the instructor access group""" return get_group(course, 'instructor') def get_group(course, groupname): + """Get or create an access group""" grpname = get_access_group_name(course, groupname) try: group = Group.objects.get(name=grpname) @@ -136,6 +149,39 @@ def instructor_dashboard(request, course_id): (group, _) = Group.objects.get_or_create(name=name) return group + def get_module_url(urlname): + """ + Construct full URL for a module from its urlname. + + Form is either urlname or modulename/urlname. If no modulename + is provided, "problem" is assumed. 
+ """ + # tolerate an XML suffix in the urlname + if urlname[-4:] == ".xml": + urlname = urlname[:-4] + + # implement default + if '/' not in urlname: + urlname = "problem/" + urlname + + # complete the url using information about the current course: + (org, course_name, _) = course_id.split("/") + return "i4x://" + org + "/" + course_name + "/" + urlname + + def get_student_from_identifier(unique_student_identifier): + """Gets a student object using either an email address or username""" + msg = "" + try: + if "@" in unique_student_identifier: + student = User.objects.get(email=unique_student_identifier) + else: + student = User.objects.get(username=unique_student_identifier) + msg += "Found a single student. " + except User.DoesNotExist: + student = None + msg += "Couldn't find student with that email or username. " + return msg, student + # process actions from form POST action = request.POST.get('action', '') use_offline = request.POST.get('use_offline_grades', False) @@ -205,88 +251,138 @@ def instructor_dashboard(request, course_id): track.views.server_track(request, action, {}, page='idashboard') msg += dump_grading_context(course) - elif "Reset student's attempts" in action or "Delete student state for problem" in action: + elif "Rescore ALL students' problem submissions" in action: + problem_urlname = request.POST.get('problem_for_all_students', '') + problem_url = get_module_url(problem_urlname) + try: + instructor_task = submit_rescore_problem_for_all_students(request, course_id, problem_url) + if instructor_task is None: + msg += 'Failed to create a background task for rescoring "{0}".'.format(problem_url) + else: + track_msg = 'rescore problem {problem} for all students in {course}'.format(problem=problem_url, course=course_id) + track.views.server_track(request, track_msg, {}, page='idashboard') + except ItemNotFoundError as e: + msg += 'Failed to create a background task for rescoring "{0}": problem not found.'.format(problem_url) + except Exception 
as e: + log.error("Encountered exception from rescore: {0}".format(e)) + msg += 'Failed to create a background task for rescoring "{0}": {1}.'.format(problem_url, e.message) + + elif "Reset ALL students' attempts" in action: + problem_urlname = request.POST.get('problem_for_all_students', '') + problem_url = get_module_url(problem_urlname) + try: + instructor_task = submit_reset_problem_attempts_for_all_students(request, course_id, problem_url) + if instructor_task is None: + msg += 'Failed to create a background task for resetting "{0}".'.format(problem_url) + else: + track_msg = 'reset problem {problem} for all students in {course}'.format(problem=problem_url, course=course_id) + track.views.server_track(request, track_msg, {}, page='idashboard') + except ItemNotFoundError as e: + log.error('Failure to reset: unknown problem "{0}"'.format(e)) + msg += 'Failed to create a background task for resetting "{0}": problem not found.'.format(problem_url) + except Exception as e: + log.error("Encountered exception from reset: {0}".format(e)) + msg += 'Failed to create a background task for resetting "{0}": {1}.'.format(problem_url, e.message) + + elif "Show Background Task History for Student" in action: + # put this before the non-student case, since the use of "in" will cause this to be missed + unique_student_identifier = request.POST.get('unique_student_identifier', '') + message, student = get_student_from_identifier(unique_student_identifier) + if student is None: + msg += message + else: + problem_urlname = request.POST.get('problem_for_student', '') + problem_url = get_module_url(problem_urlname) + message, datatable = get_background_task_table(course_id, problem_url, student) + msg += message + + elif "Show Background Task History" in action: + problem_urlname = request.POST.get('problem_for_all_students', '') + problem_url = get_module_url(problem_urlname) + message, datatable = get_background_task_table(course_id, problem_url) + msg += message + + elif ("Reset 
student's attempts" in action or + "Delete student state for module" in action or + "Rescore student's problem submission" in action): # get the form data unique_student_identifier = request.POST.get('unique_student_identifier', '') - problem_to_reset = request.POST.get('problem_to_reset', '') - - if problem_to_reset[-4:] == ".xml": - problem_to_reset = problem_to_reset[:-4] - + problem_urlname = request.POST.get('problem_for_student', '') + module_state_key = get_module_url(problem_urlname) # try to uniquely id student by email address or username - try: - if "@" in unique_student_identifier: - student_to_reset = User.objects.get(email=unique_student_identifier) - else: - student_to_reset = User.objects.get(username=unique_student_identifier) - msg += "Found a single student to reset. " - except: - student_to_reset = None - msg += "Couldn't find student with that email or username. " - - if student_to_reset is not None: + message, student = get_student_from_identifier(unique_student_identifier) + msg += message + student_module = None + if student is not None: # find the module in question - if '/' not in problem_to_reset: # allow state of modules other than problem to be reset - problem_to_reset = "problem/" + problem_to_reset # but problem is the default try: - (org, course_name, _) = course_id.split("/") - module_state_key = "i4x://" + org + "/" + course_name + "/" + problem_to_reset - module_to_reset = StudentModule.objects.get(student_id=student_to_reset.id, - course_id=course_id, - module_state_key=module_state_key) - msg += "Found module to reset. " - except Exception: + student_module = StudentModule.objects.get(student_id=student.id, + course_id=course_id, + module_state_key=module_state_key) + msg += "Found module. " + except StudentModule.DoesNotExist: msg += "Couldn't find module with that urlname. " - if "Delete student state for problem" in action: - # delete the state - try: - module_to_reset.delete() - msg += "Deleted student module state for %s!" 
% module_state_key - except: - msg += "Failed to delete module state for %s/%s" % (unique_student_identifier, problem_to_reset) - else: - # modify the problem's state - try: - # load the state json - problem_state = json.loads(module_to_reset.state) - old_number_of_attempts = problem_state["attempts"] - problem_state["attempts"] = 0 + if student_module is not None: + if "Delete student state for module" in action: + # delete the state + try: + student_module.delete() + msg += "Deleted student module state for %s!" % module_state_key + track_format = 'delete student module state for problem {problem} for student {student} in {course}' + track_msg = track_format.format(problem=problem_url, student=unique_student_identifier, course=course_id) + track.views.server_track(request, track_msg, {}, page='idashboard') + except: + msg += "Failed to delete module state for %s/%s" % (unique_student_identifier, problem_urlname) + elif "Reset student's attempts" in action: + # modify the problem's state + try: + # load the state json + problem_state = json.loads(student_module.state) + old_number_of_attempts = problem_state["attempts"] + problem_state["attempts"] = 0 - # save - module_to_reset.state = json.dumps(problem_state) - module_to_reset.save() - track.views.server_track(request, - '{instructor} reset attempts from {old_attempts} to 0 for {student} on problem {problem} in {course}'.format( - old_attempts=old_number_of_attempts, - student=student_to_reset, - problem=module_to_reset.module_state_key, - instructor=request.user, - course=course_id), - {}, - page='idashboard') - msg += "Module state successfully reset!" - except: - msg += "Couldn't reset module state. 
" + # save + student_module.state = json.dumps(problem_state) + student_module.save() + track_format = '{instructor} reset attempts from {old_attempts} to 0 for {student} on problem {problem} in {course}' + track_msg = track_format.format(old_attempts=old_number_of_attempts, + student=student, + problem=student_module.module_state_key, + instructor=request.user, + course=course_id) + track.views.server_track(request, track_msg, {}, page='idashboard') + msg += "Module state successfully reset!" + except: + msg += "Couldn't reset module state. " + else: + # "Rescore student's problem submission" case + try: + instructor_task = submit_rescore_problem_for_student(request, course_id, module_state_key, student) + if instructor_task is None: + msg += 'Failed to create a background task for rescoring "{0}" for student {1}.'.format(module_state_key, unique_student_identifier) + else: + track_msg = 'rescore problem {problem} for student {student} in {course}'.format(problem=module_state_key, student=unique_student_identifier, course=course_id) + track.views.server_track(request, track_msg, {}, page='idashboard') + except Exception as e: + log.exception("Encountered exception from rescore: {0}") + msg += 'Failed to create a background task for rescoring "{0}": {1}.'.format(module_state_key, e.message) elif "Get link to student's progress page" in action: unique_student_identifier = request.POST.get('unique_student_identifier', '') - try: - if "@" in unique_student_identifier: - student_to_reset = User.objects.get(email=unique_student_identifier) - else: - student_to_reset = User.objects.get(username=unique_student_identifier) - progress_url = reverse('student_progress', kwargs={'course_id': course_id, 'student_id': student_to_reset.id}) + # try to uniquely id student by email address or username + message, student = get_student_from_identifier(unique_student_identifier) + msg += message + if student is not None: + progress_url = reverse('student_progress', 
kwargs={'course_id': course_id, 'student_id': student.id}) track.views.server_track(request, '{instructor} requested progress page for {student} in {course}'.format( - student=student_to_reset, + student=student, instructor=request.user, course=course_id), {}, page='idashboard') - msg += "<a href='{0}' target='_blank'> Progress page for username: {1} with email address: {2}.</a>".format(progress_url, student_to_reset.username, student_to_reset.email) - except: - msg += "Couldn't find student with that username. " + msg += "<a href='{0}' target='_blank'> Progress page for username: {1} with email address: {2}.</a>".format(progress_url, student.username, student.email) #---------------------------------------- # export grades to remote gradebook @@ -427,7 +523,7 @@ def instructor_dashboard(request, course_id): if problem_to_dump[-4:] == ".xml": problem_to_dump = problem_to_dump[:-4] try: - (org, course_name, run) = course_id.split("/") + (org, course_name, _) = course_id.split("/") module_state_key = "i4x://" + org + "/" + course_name + "/problem/" + problem_to_dump smdat = StudentModule.objects.filter(course_id=course_id, module_state_key=module_state_key) @@ -625,6 +721,16 @@ def instructor_dashboard(request, course_id): if use_offline: msg += "
    Grades from %s" % offline_grades_available(course_id) + # generate list of pending background tasks + if settings.MITX_FEATURES.get('ENABLE_INSTRUCTOR_BACKGROUND_TASKS'): + instructor_tasks = get_running_instructor_tasks(course_id) + else: + instructor_tasks = None + + # display course stats only if there is no other table to display: + course_stats = None + if not datatable: + course_stats = get_course_stats_table() #---------------------------------------- # context for rendering @@ -634,12 +740,13 @@ def instructor_dashboard(request, course_id): 'instructor_access': instructor_access, 'forum_admin_access': forum_admin_access, 'datatable': datatable, + 'course_stats': course_stats, 'msg': msg, 'modeflag': {idash_mode: 'selectedmode'}, 'problems': problems, # psychometrics 'plots': plots, # psychometrics 'course_errors': modulestore().get_item_errors(course.location), - + 'instructor_tasks': instructor_tasks, 'djangopid': os.getpid(), 'mitx_version': getattr(settings, 'MITX_VERSION_STRING', ''), 'offline_grade_log': offline_grades_available(course_id), @@ -1030,7 +1137,7 @@ def _do_unenroll_students(course_id, students): """Do the actual work of un-enrolling multiple students, presented as a string of emails separated by commas or returns""" - old_students, old_students_lc = get_and_clean_student_list(students) + old_students, _ = get_and_clean_student_list(students) status = dict([x, 'unprocessed'] for x in old_students) for student in old_students: @@ -1054,7 +1161,7 @@ def _do_unenroll_students(course_id, students): try: ce[0].delete() status[student] = "un-enrolled" - except Exception as err: + except Exception: if not isok: status[student] = "Error! Failed to un-enroll" @@ -1113,11 +1220,11 @@ def get_answers_distribution(request, course_id): def compute_course_stats(course): - ''' + """ Compute course statistics, including number of problems, videos, html. course is a CourseDescriptor from the xmodule system. 
- ''' + """ # walk the course by using get_children() until we come to the leaves; count the # number of different leaf types @@ -1137,10 +1244,10 @@ def compute_course_stats(course): def dump_grading_context(course): - ''' + """ Dump information about course grading context (eg which problems are graded in what assignments) Very useful for debugging grading_policy.json and policy.json - ''' + """ msg = "-----------------------------------------------------------------------------\n" msg += "Course grader:\n" @@ -1164,10 +1271,10 @@ def dump_grading_context(course): msg += "--> Section %s:\n" % (gs) for sec in gsvals: s = sec['section_descriptor'] - format = getattr(s.lms, 'format', None) + grade_format = getattr(s.lms, 'format', None) aname = '' - if format in graders: - g = graders[format] + if grade_format in graders: + g = graders[grade_format] aname = '%s %02d' % (g.short_label, g.index) g.index += 1 elif s.display_name in graders: @@ -1176,8 +1283,73 @@ def dump_grading_context(course): notes = '' if getattr(s, 'score_by_attempt', False): notes = ', score by attempt!' - msg += " %s (format=%s, Assignment=%s%s)\n" % (s.display_name, format, aname, notes) + msg += " %s (grade_format=%s, Assignment=%s%s)\n" % (s.display_name, grade_format, aname, notes) msg += "all descriptors:\n" msg += "length=%d\n" % len(gc['all_descriptors']) msg = '
<pre>%s</pre>' % msg.replace('<', '&lt;') return msg + + +def get_background_task_table(course_id, problem_url, student=None): + """ + Construct the "datatable" structure to represent background task history. + + Filters the background task history to the specified course and problem. + If a student is provided, filters to only those tasks for which that student + was specified. + + Returns a tuple of (msg, datatable), where the msg is a possible error message, + and the datatable is the datatable to be used for display. + """ + history_entries = get_instructor_task_history(course_id, problem_url, student) + datatable = {} + msg = "" + # first check to see if there is any history at all + # (note that we don't have to check that the arguments are valid; it + # just won't find any entries.) + if (history_entries.count()) == 0: + if student is not None: + template = 'Failed to find any background tasks for course "{course}", module "{problem}" and student "{student}".' + msg += template.format(course=course_id, problem=problem_url, student=student.username) + else: + msg += 'Failed to find any background tasks for course "{course}" and module "{problem}".'.format(course=course_id, problem=problem_url) + else: + datatable['header'] = ["Task Type", + "Task Id", + "Requester", + "Submitted", + "Duration (sec)", + "Task State", + "Task Status", + "Task Output"] + + datatable['data'] = [] + for instructor_task in history_entries: + # get duration info, if known: + duration_sec = 'unknown' + if hasattr(instructor_task, 'task_output') and instructor_task.task_output is not None: + task_output = json.loads(instructor_task.task_output) + if 'duration_ms' in task_output: + duration_sec = int(task_output['duration_ms'] / 1000.0) + # get progress status message: + success, task_message = get_task_completion_info(instructor_task) + status = "Complete" if success else "Incomplete" + # generate row for this task: + row = [str(instructor_task.task_type), + str(instructor_task.task_id), +
str(instructor_task.requester), + instructor_task.created.isoformat(' '), + duration_sec, + str(instructor_task.task_state), + status, + task_message] + datatable['data'].append(row) + + if student is not None: + datatable['title'] = "{course_id} > {location} > {student}".format(course_id=course_id, + location=problem_url, + student=student.username) + else: + datatable['title'] = "{course_id} > {location}".format(course_id=course_id, location=problem_url) + + return msg, datatable diff --git a/lms/djangoapps/instructor_task/__init__.py b/lms/djangoapps/instructor_task/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/lms/djangoapps/instructor_task/api.py b/lms/djangoapps/instructor_task/api.py new file mode 100644 index 0000000000..bd3c5e033a --- /dev/null +++ b/lms/djangoapps/instructor_task/api.py @@ -0,0 +1,164 @@ +""" +API for submitting background tasks by an instructor for a course. + +Also includes methods for getting information about tasks that have +already been submitted, filtered either by running state or input +arguments. + +""" + +from celery.states import READY_STATES + +from xmodule.modulestore.django import modulestore + +from instructor_task.models import InstructorTask +from instructor_task.tasks import (rescore_problem, + reset_problem_attempts, + delete_problem_state) + +from instructor_task.api_helper import (check_arguments_for_rescoring, + encode_problem_and_student_input, + submit_task) + + +def get_running_instructor_tasks(course_id): + """ + Returns a query of InstructorTask objects of running tasks for a given course. + + Used to generate a list of tasks to display on the instructor dashboard. + """ + instructor_tasks = InstructorTask.objects.filter(course_id=course_id) + # exclude states that are "ready" (i.e. not "running", e.g. 
failure, success, revoked): + for state in READY_STATES: + instructor_tasks = instructor_tasks.exclude(task_state=state) + return instructor_tasks.order_by('-id') + + +def get_instructor_task_history(course_id, problem_url, student=None): + """ + Returns a query of InstructorTask objects of historical tasks for a given course, + that match a particular problem and optionally a student. + """ + _, task_key = encode_problem_and_student_input(problem_url, student) + + instructor_tasks = InstructorTask.objects.filter(course_id=course_id, task_key=task_key) + return instructor_tasks.order_by('-id') + + +def submit_rescore_problem_for_student(request, course_id, problem_url, student): + """ + Request a problem to be rescored as a background task. + + The problem will be rescored for the specified student only. Parameters are the `course_id`, + the `problem_url`, and the `student` as a User object. + The url must specify the location of the problem, using i4x-type notation. + + ItemNotFoundException is raised if the problem doesn't exist, or AlreadyRunningError + if the problem is already being rescored for this student, or NotImplementedError if + the problem doesn't support rescoring. + + This method makes sure the InstructorTask entry is committed. + When called from any view that is wrapped by TransactionMiddleware, + and thus in a "commit-on-success" transaction, an autocommit buried within here + will cause any pending transaction to be committed by a successful + save here. Any future database operations will take place in a + separate transaction. + + """ + # check arguments: let exceptions return up to the caller. 
+ check_arguments_for_rescoring(course_id, problem_url) + + task_type = 'rescore_problem' + task_class = rescore_problem + task_input, task_key = encode_problem_and_student_input(problem_url, student) + return submit_task(request, task_type, task_class, course_id, task_input, task_key) + + +def submit_rescore_problem_for_all_students(request, course_id, problem_url): + """ + Request a problem to be rescored as a background task. + + The problem will be rescored for all students who have accessed the + particular problem in a course and have provided and checked an answer. + Parameters are the `course_id` and the `problem_url`. + The url must specify the location of the problem, using i4x-type notation. + + ItemNotFoundException is raised if the problem doesn't exist, or AlreadyRunningError + if the problem is already being rescored, or NotImplementedError if the problem doesn't + support rescoring. + + This method makes sure the InstructorTask entry is committed. + When called from any view that is wrapped by TransactionMiddleware, + and thus in a "commit-on-success" transaction, an autocommit buried within here + will cause any pending transaction to be committed by a successful + save here. Any future database operations will take place in a + separate transaction. + """ + # check arguments: let exceptions return up to the caller. + check_arguments_for_rescoring(course_id, problem_url) + + # check to see if task is already running, and reserve it otherwise + task_type = 'rescore_problem' + task_class = rescore_problem + task_input, task_key = encode_problem_and_student_input(problem_url) + return submit_task(request, task_type, task_class, course_id, task_input, task_key) + + +def submit_reset_problem_attempts_for_all_students(request, course_id, problem_url): + """ + Request to have attempts reset for a problem as a background task. + + The problem's attempts will be reset for all students who have accessed the + particular problem in a course. 
Parameters are the `course_id` and + the `problem_url`. The url must specify the location of the problem, + using i4x-type notation. + + ItemNotFoundException is raised if the problem doesn't exist, or AlreadyRunningError + if the problem is already being reset. + + This method makes sure the InstructorTask entry is committed. + When called from any view that is wrapped by TransactionMiddleware, + and thus in a "commit-on-success" transaction, an autocommit buried within here + will cause any pending transaction to be committed by a successful + save here. Any future database operations will take place in a + separate transaction. + """ + # check arguments: make sure that the problem_url is defined + # (since that's currently typed in). If the corresponding module descriptor doesn't exist, + # an exception will be raised. Let it pass up to the caller. + modulestore().get_instance(course_id, problem_url) + + task_type = 'reset_problem_attempts' + task_class = reset_problem_attempts + task_input, task_key = encode_problem_and_student_input(problem_url) + return submit_task(request, task_type, task_class, course_id, task_input, task_key) + + +def submit_delete_problem_state_for_all_students(request, course_id, problem_url): + """ + Request to have state deleted for a problem as a background task. + + The problem's state will be deleted for all students who have accessed the + particular problem in a course. Parameters are the `course_id` and + the `problem_url`. The url must specify the location of the problem, + using i4x-type notation. + + ItemNotFoundException is raised if the problem doesn't exist, or AlreadyRunningError + if the particular problem's state is already being deleted. + + This method makes sure the InstructorTask entry is committed. 
+ When called from any view that is wrapped by TransactionMiddleware, + and thus in a "commit-on-success" transaction, an autocommit buried within here + will cause any pending transaction to be committed by a successful + save here. Any future database operations will take place in a + separate transaction. + """ + # check arguments: make sure that the problem_url is defined + # (since that's currently typed in). If the corresponding module descriptor doesn't exist, + # an exception will be raised. Let it pass up to the caller. + modulestore().get_instance(course_id, problem_url) + + task_type = 'delete_problem_state' + task_class = delete_problem_state + task_input, task_key = encode_problem_and_student_input(problem_url) + return submit_task(request, task_type, task_class, course_id, task_input, task_key) diff --git a/lms/djangoapps/instructor_task/api_helper.py b/lms/djangoapps/instructor_task/api_helper.py new file mode 100644 index 0000000000..f9febd17d7 --- /dev/null +++ b/lms/djangoapps/instructor_task/api_helper.py @@ -0,0 +1,266 @@ +import hashlib +import json +import logging + +from django.db import transaction + +from celery.result import AsyncResult +from celery.states import READY_STATES, SUCCESS, FAILURE, REVOKED + +from courseware.module_render import get_xqueue_callback_url_prefix + +from xmodule.modulestore.django import modulestore +from instructor_task.models import InstructorTask, PROGRESS + + +log = logging.getLogger(__name__) + + +class AlreadyRunningError(Exception): + """Exception indicating that a background task is already running""" + pass + + +def _task_is_running(course_id, task_type, task_key): + """Checks if a particular task is already running""" + runningTasks = InstructorTask.objects.filter(course_id=course_id, task_type=task_type, task_key=task_key) + # exclude states that are "ready" (i.e. not "running", e.g. 
failure, success, revoked): + for state in READY_STATES: + runningTasks = runningTasks.exclude(task_state=state) + return len(runningTasks) > 0 + + +@transaction.autocommit +def _reserve_task(course_id, task_type, task_key, task_input, requester): + """ + Creates a database entry to indicate that a task is in progress. + + Throws AlreadyRunningError if the task is already in progress. + Includes the creation of an arbitrary value for task_id, to be + submitted with the task call to celery. + + Autocommit annotation makes sure the database entry is committed. + When called from any view that is wrapped by TransactionMiddleware, + and thus in a "commit-on-success" transaction, this autocommit here + will cause any pending transaction to be committed by a successful + save here. Any future database operations will take place in a + separate transaction. + + Note that there is a chance of a race condition here, when two users + try to run the same task at almost exactly the same time. One user + could be after the check and before the create when the second user + gets to the check. At that point, both users are able to run their + tasks simultaneously. This is deemed a small enough risk to not + put in further safeguards. + """ + + if _task_is_running(course_id, task_type, task_key): + raise AlreadyRunningError("requested task is already running") + + # Create log entry now, so that future requests will know it's running. + return InstructorTask.create(course_id, task_type, task_key, task_input, requester) + + +def _get_xmodule_instance_args(request): + """ + Calculate parameters needed for instantiating xmodule instances. + + The `request_info` will be passed to a tracking log function, to provide information + about the source of the task request. The `xqueue_callback_url_prefix` is used to + permit old-style xqueue callbacks directly to the appropriate module in the LMS. 
+ """ + request_info = {'username': request.user.username, + 'ip': request.META['REMOTE_ADDR'], + 'agent': request.META.get('HTTP_USER_AGENT', ''), + 'host': request.META['SERVER_NAME'], + } + + xmodule_instance_args = {'xqueue_callback_url_prefix': get_xqueue_callback_url_prefix(request), + 'request_info': request_info, + } + return xmodule_instance_args + + +def _update_instructor_task(instructor_task, task_result): + """ + Updates and possibly saves a InstructorTask entry based on a task Result. + + Used when updated status is requested. + + The `instructor_task` that is passed in is updated in-place, but + is usually not saved. In general, tasks that have finished (either with + success or failure) should have their entries updated by the task itself, + so are not updated here. Tasks that are still running are not updated + while they run. So the one exception to the no-save rule are tasks that + are in a "revoked" state. This may mean that the task never had the + opportunity to update the InstructorTask entry. + + Calculates json to store in "task_output" field of the `instructor_task`, + as well as updating the task_state. + + For a successful task, the json contains the output of the task result. + For a failed task, the json contains "exception", "message", and "traceback" + keys. A revoked task just has a "message" stating it was revoked. + """ + # Pull values out of the result object as close to each other as possible. + # If we wait and check the values later, the values for the state and result + # are more likely to have changed. Pull the state out first, and + # then code assuming that the result may not exactly match the state. 
+ task_id = task_result.task_id + result_state = task_result.state + returned_result = task_result.result + result_traceback = task_result.traceback + + # Assume we don't always update the InstructorTask entry if we don't have to: + entry_needs_saving = False + task_output = None + + if result_state in [PROGRESS, SUCCESS]: + # construct a status message directly from the task result's result: + # it needs to go back with the entry passed in. + log.info("background task (%s), state %s: result: %s", task_id, result_state, returned_result) + task_output = InstructorTask.create_output_for_success(returned_result) + elif result_state == FAILURE: + # on failure, the result's result contains the exception that caused the failure + exception = returned_result + traceback = result_traceback if result_traceback is not None else '' + log.warning("background task (%s) failed: %s %s", task_id, returned_result, traceback) + task_output = InstructorTask.create_output_for_failure(exception, result_traceback) + elif result_state == REVOKED: + # on revocation, the result's result doesn't contain anything + # but we cannot rely on the worker thread to set this status, + # so we set it here. + entry_needs_saving = True + log.warning("background task (%s) revoked.", task_id) + task_output = InstructorTask.create_output_for_revoked() + + # save progress and state into the entry, even if it's not being saved: + # when celery is run in "ALWAYS_EAGER" mode, progress needs to go back + # with the entry passed in. + instructor_task.task_state = result_state + if task_output is not None: + instructor_task.task_output = task_output + + if entry_needs_saving: + instructor_task.save() + + +def get_updated_instructor_task(task_id): + """ + Returns InstructorTask object corresponding to a given `task_id`. + + If the InstructorTask thinks the task is still running, then + the task's result is checked to return an updated state and output. 
+ """ + # First check if the task_id is known + try: + instructor_task = InstructorTask.objects.get(task_id=task_id) + except InstructorTask.DoesNotExist: + log.warning("query for InstructorTask status failed: task_id=(%s) not found", task_id) + return None + + # if the task is not already known to be done, then we need to query + # the underlying task's result object: + if instructor_task.task_state not in READY_STATES: + result = AsyncResult(task_id) + _update_instructor_task(instructor_task, result) + + return instructor_task + + +def get_status_from_instructor_task(instructor_task): + """ + Get the status for a given InstructorTask entry. + + Returns a dict, with the following keys: + 'task_id': id assigned by LMS and used by celery. + 'task_state': state of task as stored in celery's result store. + 'in_progress': boolean indicating if task is still running. + 'task_progress': dict containing progress information. This includes: + 'attempted': number of attempts made + 'updated': number of attempts that "succeeded" + 'total': number of possible subtasks to attempt + 'action_name': user-visible verb to use in status messages. Should be past-tense. + 'duration_ms': how long the task has (or had) been running. + 'exception': name of exception class raised in failed tasks. + 'message': returned for failed and revoked tasks. + 'traceback': optional, returned if task failed and produced a traceback. + + """ + status = {} + + if instructor_task is not None: + # status basic information matching what's stored in InstructorTask: + status['task_id'] = instructor_task.task_id + status['task_state'] = instructor_task.task_state + status['in_progress'] = instructor_task.task_state not in READY_STATES + if instructor_task.task_output is not None: + status['task_progress'] = json.loads(instructor_task.task_output) + + return status + + +def check_arguments_for_rescoring(course_id, problem_url): + """ + Do simple checks on the descriptor to confirm that it supports rescoring. 
+ + Confirms first that the problem_url is defined (since that's currently typed + in). An ItemNotFoundException is raised if the corresponding module + descriptor doesn't exist. NotImplementedError is raised if the + corresponding module doesn't support rescoring calls. + """ + descriptor = modulestore().get_instance(course_id, problem_url) + if not hasattr(descriptor, 'module_class') or not hasattr(descriptor.module_class, 'rescore_problem'): + msg = "Specified module does not support rescoring." + raise NotImplementedError(msg) + + +def encode_problem_and_student_input(problem_url, student=None): + """ + Encode problem_url and optional student into task_key and task_input values. + + `problem_url` is full URL of the problem. + `student` is the user object of the student + """ + if student is not None: + task_input = {'problem_url': problem_url, 'student': student.username} + task_key_stub = "{student}_{problem}".format(student=student.id, problem=problem_url) + else: + task_input = {'problem_url': problem_url} + task_key_stub = "_{problem}".format(problem=problem_url) + + # create the key value by using MD5 hash: + task_key = hashlib.md5(task_key_stub).hexdigest() + + return task_input, task_key + + +def submit_task(request, task_type, task_class, course_id, task_input, task_key): + """ + Helper method to submit a task. + + Reserves the requested task, based on the `course_id`, `task_type`, and `task_key`, + checking to see if the task is already running. The `task_input` is also passed so that + it can be stored in the resulting InstructorTask entry. Arguments are extracted from + the `request` provided by the originating server request. Then the task is submitted to run + asynchronously, using the specified `task_class` and using the task_id constructed for it. + + `AlreadyRunningError` is raised if the task is already running. + + The _reserve_task method makes sure the InstructorTask entry is committed. 
+ When called from any view that is wrapped by TransactionMiddleware, + and thus in a "commit-on-success" transaction, an autocommit buried within here + will cause any pending transaction to be committed by a successful + save here. Any future database operations will take place in a + separate transaction. + + """ + # check to see if task is already running, and reserve it otherwise: + instructor_task = _reserve_task(course_id, task_type, task_key, task_input, request.user) + + # submit task: + task_id = instructor_task.task_id + task_args = [instructor_task.id, _get_xmodule_instance_args(request)] + task_class.apply_async(task_args, task_id=task_id) + + return instructor_task diff --git a/lms/djangoapps/instructor_task/migrations/0001_initial.py b/lms/djangoapps/instructor_task/migrations/0001_initial.py new file mode 100644 index 0000000000..4e12f292c1 --- /dev/null +++ b/lms/djangoapps/instructor_task/migrations/0001_initial.py @@ -0,0 +1,86 @@ +# -*- coding: utf-8 -*- +import datetime +from south.db import db +from south.v2 import SchemaMigration +from django.db import models + + +class Migration(SchemaMigration): + + def forwards(self, orm): + # Adding model 'InstructorTask' + db.create_table('instructor_task_instructortask', ( + ('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)), + ('task_type', self.gf('django.db.models.fields.CharField')(max_length=50, db_index=True)), + ('course_id', self.gf('django.db.models.fields.CharField')(max_length=255, db_index=True)), + ('task_key', self.gf('django.db.models.fields.CharField')(max_length=255, db_index=True)), + ('task_input', self.gf('django.db.models.fields.CharField')(max_length=255)), + ('task_id', self.gf('django.db.models.fields.CharField')(max_length=255, db_index=True)), + ('task_state', self.gf('django.db.models.fields.CharField')(max_length=50, null=True, db_index=True)), + ('task_output', self.gf('django.db.models.fields.CharField')(max_length=1024, null=True)), + ('requester', 
self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])), + ('created', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, null=True, blank=True)), + ('updated', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, blank=True)), + )) + db.send_create_signal('instructor_task', ['InstructorTask']) + + + def backwards(self, orm): + # Deleting model 'InstructorTask' + db.delete_table('instructor_task_instructortask') + + + models = { + 'auth.group': { + 'Meta': {'object_name': 'Group'}, + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), + 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}), + 'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}) + }, + 'auth.permission': { + 'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'}, + 'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}), + 'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}), + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), + 'name': ('django.db.models.fields.CharField', [], {'max_length': '50'}) + }, + 'auth.user': { + 'Meta': {'object_name': 'User'}, + 'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}), + 'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}), + 'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}), + 'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}), + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), + 'is_active': 
('django.db.models.fields.BooleanField', [], {'default': 'True'}), + 'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), + 'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), + 'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}), + 'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}), + 'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}), + 'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}), + 'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'}) + }, + 'contenttypes.contenttype': { + 'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"}, + 'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}), + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), + 'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}), + 'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}) + }, + 'instructor_task.instructortask': { + 'Meta': {'object_name': 'InstructorTask'}, + 'course_id': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}), + 'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'null': 'True', 'blank': 'True'}), + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), + 'requester': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}), + 'task_id': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}), + 'task_input': ('django.db.models.fields.CharField', [], {'max_length': '255'}), + 'task_key': ('django.db.models.fields.CharField', [], {'max_length': '255', 
'db_index': 'True'}), + 'task_output': ('django.db.models.fields.CharField', [], {'max_length': '1024', 'null': 'True'}), + 'task_state': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True', 'db_index': 'True'}), + 'task_type': ('django.db.models.fields.CharField', [], {'max_length': '50', 'db_index': 'True'}), + 'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}) + } + } + + complete_apps = ['instructor_task'] \ No newline at end of file diff --git a/lms/djangoapps/instructor_task/migrations/__init__.py b/lms/djangoapps/instructor_task/migrations/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/lms/djangoapps/instructor_task/models.py b/lms/djangoapps/instructor_task/models.py new file mode 100644 index 0000000000..f1ebf814fa --- /dev/null +++ b/lms/djangoapps/instructor_task/models.py @@ -0,0 +1,156 @@ +""" +WE'RE USING MIGRATIONS! + +If you make changes to this model, be sure to create an appropriate migration +file and check it in at the same time as your model changes. To do that, + +1. Go to the edx-platform dir +2. ./manage.py schemamigration instructor_task --auto description_of_your_change +3. Add the migration file created in edx-platform/lms/djangoapps/instructor_task/migrations/ + + +ASSUMPTIONS: modules have unique IDs, even across different module_types + +""" +from uuid import uuid4 +import json + +from django.contrib.auth.models import User +from django.db import models, transaction + + +# define custom states used by InstructorTask +QUEUING = 'QUEUING' +PROGRESS = 'PROGRESS' + + +class InstructorTask(models.Model): + """ + Stores information about background tasks that have been submitted to + perform work by an instructor (or course staff). + Examples include grading and rescoring. + + `task_type` identifies the kind of task being performed, e.g. rescoring. + `course_id` uses the course run's unique id to identify the course. 
+ `task_key` stores relevant input arguments encoded into key value for testing to see + if the task is already running (together with task_type and course_id). + `task_input` stores input arguments as JSON-serialized dict, for reporting purposes. + Examples include url of problem being rescored, id of student if only one student being rescored. + + `task_id` stores the id used by celery for the background task. + `task_state` stores the last known state of the celery task + `task_output` stores the output of the celery task. + Format is a JSON-serialized dict. Content varies by task_type and task_state. + + `requester` stores id of user who submitted the task + `created` stores date that entry was first created + `updated` stores date that entry was last modified + """ + task_type = models.CharField(max_length=50, db_index=True) + course_id = models.CharField(max_length=255, db_index=True) + task_key = models.CharField(max_length=255, db_index=True) + task_input = models.CharField(max_length=255) + task_id = models.CharField(max_length=255, db_index=True) # max_length from celery_taskmeta + task_state = models.CharField(max_length=50, null=True, db_index=True) # max_length from celery_taskmeta + task_output = models.CharField(max_length=1024, null=True) + requester = models.ForeignKey(User, db_index=True) + created = models.DateTimeField(auto_now_add=True, null=True) + updated = models.DateTimeField(auto_now=True) + + def __repr__(self): + return 'InstructorTask<%r>' % ({ + 'task_type': self.task_type, + 'course_id': self.course_id, + 'task_input': self.task_input, + 'task_id': self.task_id, + 'task_state': self.task_state, + 'task_output': self.task_output, + },) + + def __unicode__(self): + return unicode(repr(self)) + + @classmethod + def create(cls, course_id, task_type, task_key, task_input, requester): + # create the task_id here, and pass it into celery: + task_id = str(uuid4()) + + json_task_input = json.dumps(task_input) + + # check length of task_input, 
and raise an exception if it's too long:
+        if len(json_task_input) > 255:
+            fmt = 'Task input longer than 255: "{input}" for "{task}" of "{course}"'
+            msg = fmt.format(input=json_task_input, task=task_type, course=course_id)
+            raise ValueError(msg)
+
+        # create the task, then save it:
+        instructor_task = cls(course_id=course_id,
+                              task_type=task_type,
+                              task_id=task_id,
+                              task_key=task_key,
+                              task_input=json_task_input,
+                              task_state=QUEUING,
+                              requester=requester)
+        instructor_task.save_now()
+
+        return instructor_task
+
+    @transaction.autocommit
+    def save_now(self):
+        """Writes InstructorTask immediately, ensuring the transaction is committed."""
+        self.save()
+
+    @staticmethod
+    def create_output_for_success(returned_result):
+        """
+        Converts successful result to output format.
+
+        Raises a ValueError exception if the output is too long.
+        """
+        # In future, there should be a check here that the resulting JSON
+        # will fit in the column.  In the meantime, just raise an exception.
+        json_output = json.dumps(returned_result)
+        if len(json_output) > 1023:
+            raise ValueError("Task output is too long: {0}".format(json_output))
+        return json_output
+
+    @staticmethod
+    def create_output_for_failure(exception, traceback_string):
+        """
+        Converts failed result information to output format.
+
+        Traceback information is truncated or not included if it would result in an output string
+        that would not fit in the database.  If the output is still too long, then the
+        exception message is also truncated.
+
+        Truncation is indicated by adding "..." to the end of the value.
+        """
+        tag = '...'
+ task_progress = {'exception': type(exception).__name__, 'message': str(exception.message)} + if traceback_string is not None: + # truncate any traceback that goes into the InstructorTask model: + task_progress['traceback'] = traceback_string + json_output = json.dumps(task_progress) + # if the resulting output is too long, then first shorten the + # traceback, and then the message, until it fits. + too_long = len(json_output) - 1023 + if too_long > 0: + if traceback_string is not None: + if too_long >= len(traceback_string) - len(tag): + # remove the traceback entry entirely (so no key or value) + del task_progress['traceback'] + too_long -= (len(traceback_string) + len('traceback')) + else: + # truncate the traceback: + task_progress['traceback'] = traceback_string[:-(too_long + len(tag))] + tag + too_long = 0 + if too_long > 0: + # we need to shorten the message: + task_progress['message'] = task_progress['message'][:-(too_long + len(tag))] + tag + json_output = json.dumps(task_progress) + return json_output + + @staticmethod + def create_output_for_revoked(): + """Creates standard message to store in output format for revoked tasks.""" + return json.dumps({'message': 'Task revoked before running'}) diff --git a/lms/djangoapps/instructor_task/tasks.py b/lms/djangoapps/instructor_task/tasks.py new file mode 100644 index 0000000000..b045de470a --- /dev/null +++ b/lms/djangoapps/instructor_task/tasks.py @@ -0,0 +1,97 @@ +""" +This file contains tasks that are designed to perform background operations on the +running state of a course. + +At present, these tasks all operate on StudentModule objects in one way or another, +so they share a visitor architecture. Each task defines an "update function" that +takes a module_descriptor, a particular StudentModule object, and xmodule_instance_args. + +A task may optionally specify a "filter function" that takes a query for StudentModule +objects, and adds additional filter clauses. 
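For illustration, the filter-function contract described above can be sketched in isolation. This is a stand-in, not part of the module: a real filter function receives a Django QuerySet of StudentModule rows and returns a narrowed QuerySet (the rescore task, for example, narrows to states containing `"done": true`), while this sketch uses a plain list of dicts:

```python
# Sketch of the optional "filter function" contract: given the collection
# of StudentModule candidates, return a narrowed collection.  A real
# filter_fcn operates on a Django QuerySet; dicts stand in for rows here.

def done_submissions_only(modules_to_update):
    """Keep only modules whose serialized state records a completed attempt."""
    return [m for m in modules_to_update if '"done": true' in m['state']]

candidate_rows = [
    {'id': 1, 'state': '{"done": true, "attempts": 2}'},
    {'id': 2, 'state': '{"attempts": 1}'},
]
filtered = done_submissions_only(candidate_rows)
```

The query-level equivalent is the `state__contains` filter that the rescore task applies.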
+
+A task also passes through "xmodule_instance_args", which is used to provide
+information to our code that instantiates xmodule instances.
+
+The task definition then calls the traversal function, passing in the three arguments
+above, along with the id value for an InstructorTask object.  The InstructorTask
+object contains a 'task_input' field which is a JSON-encoded dict containing
+a problem URL and optionally a student.  These are used to set up the initial value
+of the query for traversing StudentModule objects.
+
+"""
+from celery import task
+from instructor_task.tasks_helper import (update_problem_module_state,
+                                          rescore_problem_module_state,
+                                          reset_attempts_module_state,
+                                          delete_problem_module_state)
+
+
+@task
+def rescore_problem(entry_id, xmodule_instance_args):
+    """Rescores a problem in a course, for all students or one specific student.
+
+    `entry_id` is the id value of the InstructorTask entry that corresponds to this task.
+    The entry contains the `course_id` that identifies the course, as well as the
+    `task_input`, which contains task-specific input.
+
+    The task_input should be a dict with the following entries:
+
+        'problem_url': the full URL to the problem to be rescored.  (required)
+
+        'student': the identifier (username or email) of a particular user whose
+            problem submission should be rescored.  If not specified, all problem
+            submissions for the problem will be rescored.
+
+    `xmodule_instance_args` provides information needed by _get_module_instance_for_task()
+    to instantiate an xmodule instance.
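As an illustration of the payload this task decodes, the snippet below round-trips the two documented shapes of task_input. The key names come from the list above; the concrete URL and username are invented for the example:

```python
import json

# Illustrative task_input payloads; 'problem_url' and 'student' are the
# documented key names, the concrete values are made up for this sketch.
rescore_all = json.dumps({'problem_url': 'i4x://Org/Course/problem/Example'})
rescore_one = json.dumps({'problem_url': 'i4x://Org/Course/problem/Example',
                          'student': 'some_student'})

decoded = json.loads(rescore_one)
```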
+    """
+    action_name = 'rescored'
+    update_fcn = rescore_problem_module_state
+    filter_fcn = lambda modules_to_update: modules_to_update.filter(state__contains='"done": true')
+    return update_problem_module_state(entry_id,
+                                       update_fcn, action_name, filter_fcn=filter_fcn,
+                                       xmodule_instance_args=xmodule_instance_args)
+
+
+@task
+def reset_problem_attempts(entry_id, xmodule_instance_args):
+    """Resets problem attempts to zero for a particular problem for all students in a course.
+
+    `entry_id` is the id value of the InstructorTask entry that corresponds to this task.
+    The entry contains the `course_id` that identifies the course, as well as the
+    `task_input`, which contains task-specific input.
+
+    The task_input should be a dict with the following entries:
+
+        'problem_url': the full URL to the problem whose attempts are to be reset.  (required)
+
+    `xmodule_instance_args` provides information needed by _get_module_instance_for_task()
+    to instantiate an xmodule instance.
+    """
+    action_name = 'reset'
+    update_fcn = reset_attempts_module_state
+    return update_problem_module_state(entry_id,
+                                       update_fcn, action_name, filter_fcn=None,
+                                       xmodule_instance_args=xmodule_instance_args)
+
+
+@task
+def delete_problem_state(entry_id, xmodule_instance_args):
+    """Deletes problem state entirely for all students on a particular problem in a course.
+
+    `entry_id` is the id value of the InstructorTask entry that corresponds to this task.
+    The entry contains the `course_id` that identifies the course, as well as the
+    `task_input`, which contains task-specific input.
+
+    The task_input should be a dict with the following entries:
+
+        'problem_url': the full URL to the problem whose state is to be deleted.  (required)
+
+    `xmodule_instance_args` provides information needed by _get_module_instance_for_task()
+    to instantiate an xmodule instance.
+ """ + action_name = 'deleted' + update_fcn = delete_problem_module_state + return update_problem_module_state(entry_id, + update_fcn, action_name, filter_fcn=None, + xmodule_instance_args=xmodule_instance_args) diff --git a/lms/djangoapps/instructor_task/tasks_helper.py b/lms/djangoapps/instructor_task/tasks_helper.py new file mode 100644 index 0000000000..c5a9b4d177 --- /dev/null +++ b/lms/djangoapps/instructor_task/tasks_helper.py @@ -0,0 +1,388 @@ +""" +This file contains tasks that are designed to perform background operations on the +running state of a course. + +""" + +import json +from time import time +from sys import exc_info +from traceback import format_exc + +from celery import current_task +from celery.utils.log import get_task_logger +from celery.signals import worker_process_init +from celery.states import SUCCESS, FAILURE + +from django.contrib.auth.models import User +from django.db import transaction +from dogapi import dog_stats_api + +from xmodule.modulestore.django import modulestore + +import mitxmako.middleware as middleware +from track.views import task_track + +from courseware.models import StudentModule +from courseware.model_data import ModelDataCache +from courseware.module_render import get_module_for_descriptor_internal +from instructor_task.models import InstructorTask, PROGRESS + +# define different loggers for use within tasks and on client side +TASK_LOG = get_task_logger(__name__) + +# define value to use when no task_id is provided: +UNKNOWN_TASK_ID = 'unknown-task_id' + + +def initialize_mako(sender=None, conf=None, **kwargs): + """ + Get mako templates to work on celery worker server's worker thread. + + The initialization of Mako templating is usually done when Django is + initializing middleware packages as part of processing a server request. + When this is run on a celery worker server, no such initialization is + called. 
+ + To make sure that we don't load this twice (just in case), we look for the + result: the defining of the lookup paths for templates. + """ + if 'main' not in middleware.lookup: + TASK_LOG.info("Initializing Mako middleware explicitly") + middleware.MakoMiddleware() + +# Actually make the call to define the hook: +worker_process_init.connect(initialize_mako) + + +class UpdateProblemModuleStateError(Exception): + """ + Error signaling a fatal condition while updating problem modules. + + Used when the current module cannot be processed and no more + modules should be attempted. + """ + pass + + +def _get_current_task(): + """Stub to make it easier to test without actually running Celery""" + return current_task + + +def _perform_module_state_update(course_id, module_state_key, student_identifier, update_fcn, action_name, filter_fcn, + xmodule_instance_args): + """ + Performs generic update by visiting StudentModule instances with the update_fcn provided. + + StudentModule instances are those that match the specified `course_id` and `module_state_key`. + If `student_identifier` is not None, it is used as an additional filter to limit the modules to those belonging + to that student. If `student_identifier` is None, performs update on modules for all students on the specified problem. + + If a `filter_fcn` is not None, it is applied to the query that has been constructed. It takes one + argument, which is the query being filtered, and returns the filtered version of the query. + + The `update_fcn` is called on each StudentModule that passes the resulting filtering. + It is passed three arguments: the module_descriptor for the module pointed to by the + module_state_key, the particular StudentModule to update, and the xmodule_instance_args being + passed through. If the value returned by the update function evaluates to a boolean True, + the update is successful; False indicates the update on the particular student module failed. 
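That three-argument contract can be sketched with a toy update function. The function name and the dict-based "row" below are illustrative only; a real update_fcn receives a StudentModule model instance:

```python
# Toy update function following the contract above: it receives the module
# descriptor, one student-module row, and the pass-through args, and
# returns True when it changed state, False when the update did not apply.

def zero_attempts(module_descriptor, student_module, xmodule_instance_args):
    state = student_module['state']
    if 'attempts' not in state:
        return False  # per-student failure: nothing to update
    state['attempts'] = 0
    return True

row = {'state': {'attempts': 3, 'done': True}}
result = zero_attempts(None, row, None)
```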
+ A raised exception indicates a fatal condition -- that no other student modules should be considered. + + The return value is a dict containing the task's results, with the following keys: + + 'attempted': number of attempts made + 'updated': number of attempts that "succeeded" + 'total': number of possible subtasks to attempt + 'action_name': user-visible verb to use in status messages. Should be past-tense. + Pass-through of input `action_name`. + 'duration_ms': how long the task has (or had) been running. + + Because this is run internal to a task, it does not catch exceptions. These are allowed to pass up to the + next level, so that it can set the failure modes and capture the error trace in the InstructorTask and the + result object. + + """ + # get start time for task: + start_time = time() + + # find the problem descriptor: + module_descriptor = modulestore().get_instance(course_id, module_state_key) + + # find the module in question + modules_to_update = StudentModule.objects.filter(course_id=course_id, + module_state_key=module_state_key) + + # give the option of rescoring an individual student. If not specified, + # then rescores all students who have responded to a problem so far + student = None + if student_identifier is not None: + # if an identifier is supplied, then look for the student, + # and let it throw an exception if none is found. 
+ if "@" in student_identifier: + student = User.objects.get(email=student_identifier) + elif student_identifier is not None: + student = User.objects.get(username=student_identifier) + + if student is not None: + modules_to_update = modules_to_update.filter(student_id=student.id) + + if filter_fcn is not None: + modules_to_update = filter_fcn(modules_to_update) + + # perform the main loop + num_updated = 0 + num_attempted = 0 + num_total = modules_to_update.count() + + def get_task_progress(): + """Return a dict containing info about current task""" + current_time = time() + progress = {'action_name': action_name, + 'attempted': num_attempted, + 'updated': num_updated, + 'total': num_total, + 'duration_ms': int((current_time - start_time) * 1000), + } + return progress + + task_progress = get_task_progress() + _get_current_task().update_state(state=PROGRESS, meta=task_progress) + for module_to_update in modules_to_update: + num_attempted += 1 + # There is no try here: if there's an error, we let it throw, and the task will + # be marked as FAILED, with a stack trace. + with dog_stats_api.timer('instructor_tasks.module.time.step', tags=['action:{name}'.format(name=action_name)]): + if update_fcn(module_descriptor, module_to_update, xmodule_instance_args): + # If the update_fcn returns true, then it performed some kind of work. + # Logging of failures is left to the update_fcn itself. + num_updated += 1 + + # update task status: + task_progress = get_task_progress() + _get_current_task().update_state(state=PROGRESS, meta=task_progress) + + return task_progress + + +def update_problem_module_state(entry_id, update_fcn, action_name, filter_fcn, + xmodule_instance_args): + """ + Performs generic update by visiting StudentModule instances with the update_fcn provided. + + The `entry_id` is the primary key for the InstructorTask entry representing the task. This function + updates the entry on success and failure of the _perform_module_state_update function it + wraps. 
It is setting the entry's value for task_state based on what Celery would set it to once + the task returns to Celery: FAILURE if an exception is encountered, and SUCCESS if it returns normally. + Other arguments are pass-throughs to _perform_module_state_update, and documented there. + + If no exceptions are raised, a dict containing the task's result is returned, with the following keys: + + 'attempted': number of attempts made + 'updated': number of attempts that "succeeded" + 'total': number of possible subtasks to attempt + 'action_name': user-visible verb to use in status messages. Should be past-tense. + Pass-through of input `action_name`. + 'duration_ms': how long the task has (or had) been running. + + Before returning, this is also JSON-serialized and stored in the task_output column of the InstructorTask entry. + + If an exception is raised internally, it is caught and recorded in the InstructorTask entry. + This is also a JSON-serialized dict, stored in the task_output column, containing the following keys: + + 'exception': type of exception object + 'message': error message from exception object + 'traceback': traceback information (truncated if necessary) + + Once the exception is caught, it is raised again and allowed to pass up to the + task-running level, so that it can also set the failure modes and capture the error trace in the + result object that Celery creates. + + """ + + # get the InstructorTask to be updated. If this fails, then let the exception return to Celery. + # There's no point in catching it here. 
+ entry = InstructorTask.objects.get(pk=entry_id) + + # get inputs to use in this task from the entry: + task_id = entry.task_id + course_id = entry.course_id + task_input = json.loads(entry.task_input) + module_state_key = task_input.get('problem_url') + student_ident = task_input['student'] if 'student' in task_input else None + + fmt = 'Starting to update problem modules as task "{task_id}": course "{course_id}" problem "{state_key}": nothing {action} yet' + TASK_LOG.info(fmt.format(task_id=task_id, course_id=course_id, state_key=module_state_key, action=action_name)) + + # add task_id to xmodule_instance_args, so that it can be output with tracking info: + if xmodule_instance_args is not None: + xmodule_instance_args['task_id'] = task_id + + # Now that we have an entry we can try to catch failures: + task_progress = None + try: + # Check that the task_id submitted in the InstructorTask matches the current task + # that is running. + request_task_id = _get_current_task().request.id + if task_id != request_task_id: + fmt = 'Requested task "{task_id}" did not match actual task "{actual_id}"' + message = fmt.format(task_id=task_id, course_id=course_id, state_key=module_state_key, actual_id=request_task_id) + TASK_LOG.error(message) + raise UpdateProblemModuleStateError(message) + + # Now do the work: + with dog_stats_api.timer('instructor_tasks.module.time.overall', tags=['action:{name}'.format(name=action_name)]): + task_progress = _perform_module_state_update(course_id, module_state_key, student_ident, update_fcn, + action_name, filter_fcn, xmodule_instance_args) + # If we get here, we assume we've succeeded, so update the InstructorTask entry in anticipation. + # But we do this within the try, in case creating the task_output causes an exception to be + # raised. 
+ entry.task_output = InstructorTask.create_output_for_success(task_progress) + entry.task_state = SUCCESS + entry.save_now() + + except Exception: + # try to write out the failure to the entry before failing + _, exception, traceback = exc_info() + traceback_string = format_exc(traceback) if traceback is not None else '' + TASK_LOG.warning("background task (%s) failed: %s %s", task_id, exception, traceback_string) + entry.task_output = InstructorTask.create_output_for_failure(exception, traceback_string) + entry.task_state = FAILURE + entry.save_now() + raise + + # log and exit, returning task_progress info as task result: + fmt = 'Finishing task "{task_id}": course "{course_id}" problem "{state_key}": final: {progress}' + TASK_LOG.info(fmt.format(task_id=task_id, course_id=course_id, state_key=module_state_key, progress=task_progress)) + return task_progress + + +def _get_task_id_from_xmodule_args(xmodule_instance_args): + """Gets task_id from `xmodule_instance_args` dict, or returns default value if missing.""" + return xmodule_instance_args.get('task_id', UNKNOWN_TASK_ID) if xmodule_instance_args is not None else UNKNOWN_TASK_ID + + +def _get_module_instance_for_task(course_id, student, module_descriptor, xmodule_instance_args=None, + grade_bucket_type=None): + """ + Fetches a StudentModule instance for a given `course_id`, `student` object, and `module_descriptor`. + + `xmodule_instance_args` is used to provide information for creating a track function and an XQueue callback. + These are passed, along with `grade_bucket_type`, to get_module_for_descriptor_internal, which sidesteps + the need for a Request object when instantiating an xmodule instance. 
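The tracking hook assembled below follows a plain closure pattern: per-task context is bound once, and the module receives a two-argument callable. Here is a self-contained sketch that appends to a list instead of calling the real task_track backend (all names and values are illustrative):

```python
# Sketch of the make_track_function pattern used below: bind per-task
# context into a closure and hand the module an (event_type, event) callable.
events = []

def make_track_function(request_info, task_info):
    def track(event_type, event):
        events.append({'request_info': request_info, 'task_info': task_info,
                       'event_type': event_type, 'event': event})
    return track

track = make_track_function({'host': 'example.invalid'}, {'task_id': 'abc123'})
track('problem_rescore', {'success': 'correct'})
```

Binding context in the factory means callers such as CapaModule only ever see the two-argument signature.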
+ """ + # reconstitute the problem's corresponding XModule: + model_data_cache = ModelDataCache.cache_for_descriptor_descendents(course_id, student, module_descriptor) + + # get request-related tracking information from args passthrough, and supplement with task-specific + # information: + request_info = xmodule_instance_args.get('request_info', {}) if xmodule_instance_args is not None else {} + task_info = {"student": student.username, "task_id": _get_task_id_from_xmodule_args(xmodule_instance_args)} + + def make_track_function(): + ''' + Make a tracking function that logs what happened. + + For insertion into ModuleSystem, and used by CapaModule, which will + provide the event_type (as string) and event (as dict) as arguments. + The request_info and task_info (and page) are provided here. + ''' + return lambda event_type, event: task_track(request_info, task_info, event_type, event, page='x_module_task') + + xqueue_callback_url_prefix = xmodule_instance_args.get('xqueue_callback_url_prefix', '') \ + if xmodule_instance_args is not None else '' + + return get_module_for_descriptor_internal(student, module_descriptor, model_data_cache, course_id, + make_track_function(), xqueue_callback_url_prefix, + grade_bucket_type=grade_bucket_type) + + +@transaction.autocommit +def rescore_problem_module_state(module_descriptor, student_module, xmodule_instance_args=None): + ''' + Takes an XModule descriptor and a corresponding StudentModule object, and + performs rescoring on the student's problem submission. + + Throws exceptions if the rescoring is fatal and should be aborted if in a loop. + In particular, raises UpdateProblemModuleStateError if module fails to instantiate, + or if the module doesn't support rescoring. + + Returns True if problem was successfully rescored for the given student, and False + if problem encountered some kind of error in rescoring. 
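The success test applied in the function body below can be summarized as a small predicate. This is a sketch; the dict shape mirrors the checks on the rescore result:

```python
# Only a rescore result whose 'success' value is 'correct' or 'incorrect'
# counts as a completed rescore; a missing or unexpected value is treated
# as a per-student failure (logged, and False is returned).

def rescore_succeeded(result):
    return result.get('success') in ('correct', 'incorrect')

outcomes = [rescore_succeeded({'success': 'correct'}),
            rescore_succeeded({'success': 'unanswered'}),
            rescore_succeeded({})]
```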
+ ''' + # unpack the StudentModule: + course_id = student_module.course_id + student = student_module.student + module_state_key = student_module.module_state_key + instance = _get_module_instance_for_task(course_id, student, module_descriptor, xmodule_instance_args, grade_bucket_type='rescore') + + if instance is None: + # Either permissions just changed, or someone is trying to be clever + # and load something they shouldn't have access to. + msg = "No module {loc} for student {student}--access denied?".format(loc=module_state_key, + student=student) + TASK_LOG.debug(msg) + raise UpdateProblemModuleStateError(msg) + + if not hasattr(instance, 'rescore_problem'): + # This should also not happen, since it should be already checked in the caller, + # but check here to be sure. + msg = "Specified problem does not support rescoring." + raise UpdateProblemModuleStateError(msg) + + result = instance.rescore_problem() + if 'success' not in result: + # don't consider these fatal, but false means that the individual call didn't complete: + TASK_LOG.warning(u"error processing rescore call for course {course}, problem {loc} and student {student}: " + "unexpected response {msg}".format(msg=result, course=course_id, loc=module_state_key, student=student)) + return False + elif result['success'] not in ['correct', 'incorrect']: + TASK_LOG.warning(u"error processing rescore call for course {course}, problem {loc} and student {student}: " + "{msg}".format(msg=result['success'], course=course_id, loc=module_state_key, student=student)) + return False + else: + TASK_LOG.debug(u"successfully processed rescore call for course {course}, problem {loc} and student {student}: " + "{msg}".format(msg=result['success'], course=course_id, loc=module_state_key, student=student)) + return True + + +@transaction.autocommit +def reset_attempts_module_state(_module_descriptor, student_module, xmodule_instance_args=None): + """ + Resets problem attempts to zero for specified `student_module`. 
+
+    Always returns True, indicating success, if it doesn't raise an exception due to database error.
+    """
+    problem_state = json.loads(student_module.state) if student_module.state else {}
+    if 'attempts' in problem_state:
+        old_number_of_attempts = problem_state["attempts"]
+        if old_number_of_attempts > 0:
+            problem_state["attempts"] = 0
+            # convert back to json and save
+            student_module.state = json.dumps(problem_state)
+            student_module.save()
+            # get request-related tracking information from args passthrough,
+            # and supplement with task-specific information:
+            request_info = xmodule_instance_args.get('request_info', {}) if xmodule_instance_args is not None else {}
+            task_info = {"student": student_module.student.username, "task_id": _get_task_id_from_xmodule_args(xmodule_instance_args)}
+            event_info = {"old_attempts": old_number_of_attempts, "new_attempts": 0}
+            task_track(request_info, task_info, 'problem_reset_attempts', event_info, page='x_module_task')
+
+    # consider the reset to be successful, even if no update was performed
+    # (skipping the write in that case is just an optimization):
+    return True
+
+
+@transaction.autocommit
+def delete_problem_module_state(_module_descriptor, student_module, xmodule_instance_args=None):
+    """
+    Delete the StudentModule entry.
+
+    Always returns True, indicating success, if it doesn't raise an exception due to database error.
+ """ + student_module.delete() + # get request-related tracking information from args passthrough, + # and supplement with task-specific information: + request_info = xmodule_instance_args.get('request_info', {}) if xmodule_instance_args is not None else {} + task_info = {"student": student_module.student.username, "task_id": _get_task_id_from_xmodule_args(xmodule_instance_args)} + task_track(request_info, task_info, 'problem_delete_state', {}, page='x_module_task') + return True diff --git a/lms/djangoapps/instructor_task/tests/__init__.py b/lms/djangoapps/instructor_task/tests/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/lms/djangoapps/instructor_task/tests/factories.py b/lms/djangoapps/instructor_task/tests/factories.py new file mode 100644 index 0000000000..e54c007a81 --- /dev/null +++ b/lms/djangoapps/instructor_task/tests/factories.py @@ -0,0 +1,19 @@ +import json + +from factory import DjangoModelFactory, SubFactory +from student.tests.factories import UserFactory as StudentUserFactory +from instructor_task.models import InstructorTask +from celery.states import PENDING + + +class InstructorTaskFactory(DjangoModelFactory): + FACTORY_FOR = InstructorTask + + task_type = 'rescore_problem' + course_id = "MITx/999/Robot_Super_Course" + task_input = json.dumps({}) + task_key = None + task_id = None + task_state = PENDING + task_output = None + requester = SubFactory(StudentUserFactory) diff --git a/lms/djangoapps/instructor_task/tests/test_api.py b/lms/djangoapps/instructor_task/tests/test_api.py new file mode 100644 index 0000000000..841fdca8a0 --- /dev/null +++ b/lms/djangoapps/instructor_task/tests/test_api.py @@ -0,0 +1,138 @@ +""" +Test for LMS instructor background task queue management +""" + +from xmodule.modulestore.exceptions import ItemNotFoundError + +from courseware.tests.factories import UserFactory + +from instructor_task.api import (get_running_instructor_tasks, + get_instructor_task_history, + 
submit_rescore_problem_for_all_students, + submit_rescore_problem_for_student, + submit_reset_problem_attempts_for_all_students, + submit_delete_problem_state_for_all_students) + +from instructor_task.api_helper import AlreadyRunningError +from instructor_task.models import InstructorTask, PROGRESS +from instructor_task.tests.test_base import (InstructorTaskTestCase, + InstructorTaskModuleTestCase, + TEST_COURSE_ID) + + +class InstructorTaskReportTest(InstructorTaskTestCase): + """ + Tests API and view methods that involve the reporting of status for background tasks. + """ + + def test_get_running_instructor_tasks(self): + # when fetching running tasks, we get all running tasks, and only running tasks + for _ in range(1, 5): + self._create_failure_entry() + self._create_success_entry() + progress_task_ids = [self._create_progress_entry().task_id for _ in range(1, 5)] + task_ids = [instructor_task.task_id for instructor_task in get_running_instructor_tasks(TEST_COURSE_ID)] + self.assertEquals(set(task_ids), set(progress_task_ids)) + + def test_get_instructor_task_history(self): + # when fetching historical tasks, we get all tasks, including running tasks + expected_ids = [] + for _ in range(1, 5): + expected_ids.append(self._create_failure_entry().task_id) + expected_ids.append(self._create_success_entry().task_id) + expected_ids.append(self._create_progress_entry().task_id) + task_ids = [instructor_task.task_id for instructor_task + in get_instructor_task_history(TEST_COURSE_ID, self.problem_url)] + self.assertEquals(set(task_ids), set(expected_ids)) + + +class InstructorTaskSubmitTest(InstructorTaskModuleTestCase): + """Tests API methods that involve the submission of background tasks.""" + + def setUp(self): + self.initialize_course() + self.student = UserFactory.create(username="student", email="student@edx.org") + self.instructor = UserFactory.create(username="instructor", email="instructor@edx.org") + + def test_submit_nonexistent_modules(self): + # confirm 
that a rescore of a non-existent module raises an exception + problem_url = InstructorTaskModuleTestCase.problem_location("NonexistentProblem") + course_id = self.course.id + request = None + with self.assertRaises(ItemNotFoundError): + submit_rescore_problem_for_student(request, course_id, problem_url, self.student) + with self.assertRaises(ItemNotFoundError): + submit_rescore_problem_for_all_students(request, course_id, problem_url) + with self.assertRaises(ItemNotFoundError): + submit_reset_problem_attempts_for_all_students(request, course_id, problem_url) + with self.assertRaises(ItemNotFoundError): + submit_delete_problem_state_for_all_students(request, course_id, problem_url) + + def test_submit_nonrescorable_modules(self): + # confirm that a rescore of an existing but unscorable module raises an exception + # (Note that it is easier to test a scorable but non-rescorable module in test_tasks, + # where we are creating real modules.) + problem_url = self.problem_section.location.url() + course_id = self.course.id + request = None + with self.assertRaises(NotImplementedError): + submit_rescore_problem_for_student(request, course_id, problem_url, self.student) + with self.assertRaises(NotImplementedError): + submit_rescore_problem_for_all_students(request, course_id, problem_url) + + def _test_submit_with_long_url(self, task_function, student=None): + problem_url_name = 'x' * 255 + self.define_option_problem(problem_url_name) + location = InstructorTaskModuleTestCase.problem_location(problem_url_name) + with self.assertRaises(ValueError): + if student is not None: + task_function(self.create_task_request(self.instructor), self.course.id, location, student) + else: + task_function(self.create_task_request(self.instructor), self.course.id, location) + + def test_submit_rescore_all_with_long_url(self): + self._test_submit_with_long_url(submit_rescore_problem_for_all_students) + + def test_submit_rescore_student_with_long_url(self):
self._test_submit_with_long_url(submit_rescore_problem_for_student, self.student) + + def test_submit_reset_all_with_long_url(self): + self._test_submit_with_long_url(submit_reset_problem_attempts_for_all_students) + + def test_submit_delete_all_with_long_url(self): + self._test_submit_with_long_url(submit_delete_problem_state_for_all_students) + + def _test_submit_task(self, task_function, student=None): + # tests submit, and then tests a second identical submission. + problem_url_name = 'H1P1' + self.define_option_problem(problem_url_name) + location = InstructorTaskModuleTestCase.problem_location(problem_url_name) + if student is not None: + instructor_task = task_function(self.create_task_request(self.instructor), + self.course.id, location, student) + else: + instructor_task = task_function(self.create_task_request(self.instructor), + self.course.id, location) + + # test resubmitting, by updating the existing record: + instructor_task = InstructorTask.objects.get(id=instructor_task.id) + instructor_task.task_state = PROGRESS + instructor_task.save() + + with self.assertRaises(AlreadyRunningError): + if student is not None: + task_function(self.create_task_request(self.instructor), self.course.id, location, student) + else: + task_function(self.create_task_request(self.instructor), self.course.id, location) + + def test_submit_rescore_all(self): + self._test_submit_task(submit_rescore_problem_for_all_students) + + def test_submit_rescore_student(self): + self._test_submit_task(submit_rescore_problem_for_student, self.student) + + def test_submit_reset_all(self): + self._test_submit_task(submit_reset_problem_attempts_for_all_students) + + def test_submit_delete_all(self): + self._test_submit_task(submit_delete_problem_state_for_all_students) diff --git a/lms/djangoapps/instructor_task/tests/test_base.py b/lms/djangoapps/instructor_task/tests/test_base.py new file mode 100644 index 0000000000..5e51b9fdeb --- /dev/null +++ 
b/lms/djangoapps/instructor_task/tests/test_base.py @@ -0,0 +1,211 @@ +""" +Base test classes for LMS instructor-initiated background tasks + +""" +import json +from uuid import uuid4 +from mock import Mock + +from celery.states import SUCCESS, FAILURE + +from django.test.testcases import TestCase +from django.contrib.auth.models import User +from django.test.utils import override_settings + +from capa.tests.response_xml_factory import OptionResponseXMLFactory +from xmodule.modulestore.django import modulestore +from xmodule.modulestore.tests.factories import CourseFactory, ItemFactory +from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase + +from student.tests.factories import CourseEnrollmentFactory, UserFactory +from courseware.model_data import StudentModule +from courseware.tests.tests import LoginEnrollmentTestCase, TEST_DATA_MONGO_MODULESTORE + +from instructor_task.api_helper import encode_problem_and_student_input +from instructor_task.models import PROGRESS, QUEUING +from instructor_task.tests.factories import InstructorTaskFactory +from instructor_task.views import instructor_task_status + + +TEST_COURSE_ORG = 'edx' +TEST_COURSE_NAME = 'Test Course' +TEST_COURSE_NUMBER = '1.23x' +TEST_SECTION_NAME = "Problem" +TEST_COURSE_ID = 'edx/1.23x/test_course' + +TEST_FAILURE_MESSAGE = 'task failed horribly' +TEST_FAILURE_EXCEPTION = 'RandomCauseError' + +OPTION_1 = 'Option 1' +OPTION_2 = 'Option 2' + + +class InstructorTaskTestCase(TestCase): + """ + Tests API and view methods that involve the reporting of status for background tasks. + """ + def setUp(self): + self.student = UserFactory.create(username="student", email="student@edx.org") + self.instructor = UserFactory.create(username="instructor", email="instructor@edx.org") + self.problem_url = InstructorTaskTestCase.problem_location("test_urlname") + + @staticmethod + def problem_location(problem_url_name): + """ + Create an internal location for a test problem. 
+ """ + return "i4x://{org}/{number}/problem/{problem_url_name}".format(org='edx', + number='1.23x', + problem_url_name=problem_url_name) + + def _create_entry(self, task_state=QUEUING, task_output=None, student=None): + """Creates a InstructorTask entry for testing.""" + task_id = str(uuid4()) + progress_json = json.dumps(task_output) if task_output is not None else None + task_input, task_key = encode_problem_and_student_input(self.problem_url, student) + + instructor_task = InstructorTaskFactory.create(course_id=TEST_COURSE_ID, + requester=self.instructor, + task_input=json.dumps(task_input), + task_key=task_key, + task_id=task_id, + task_state=task_state, + task_output=progress_json) + return instructor_task + + def _create_failure_entry(self): + """Creates a InstructorTask entry representing a failed task.""" + # view task entry for task failure + progress = {'message': TEST_FAILURE_MESSAGE, + 'exception': TEST_FAILURE_EXCEPTION, + } + return self._create_entry(task_state=FAILURE, task_output=progress) + + def _create_success_entry(self, student=None): + """Creates a InstructorTask entry representing a successful task.""" + return self._create_progress_entry(student, task_state=SUCCESS) + + def _create_progress_entry(self, student=None, task_state=PROGRESS): + """Creates a InstructorTask entry representing a task in progress.""" + progress = {'attempted': 3, + 'updated': 2, + 'total': 5, + 'action_name': 'rescored', + } + return self._create_entry(task_state=task_state, task_output=progress, student=student) + + +@override_settings(MODULESTORE=TEST_DATA_MONGO_MODULESTORE) +class InstructorTaskModuleTestCase(LoginEnrollmentTestCase, ModuleStoreTestCase): + """ + Base test class for InstructorTask-related tests that require + the setup of a course and problem in order to access StudentModule state. 
+ """ + course = None + current_user = None + + def initialize_course(self): + """Create a course in the store, with a chapter and section.""" + self.module_store = modulestore() + + # Create the course + self.course = CourseFactory.create(org=TEST_COURSE_ORG, + number=TEST_COURSE_NUMBER, + display_name=TEST_COURSE_NAME) + + # Add a chapter to the course + chapter = ItemFactory.create(parent_location=self.course.location, + display_name=TEST_SECTION_NAME) + + # add a sequence to the course to which the problems can be added + self.problem_section = ItemFactory.create(parent_location=chapter.location, + template='i4x://edx/templates/sequential/Empty', + display_name=TEST_SECTION_NAME) + + @staticmethod + def get_user_email(username): + """Generate email address based on username""" + return '{0}@test.com'.format(username) + + def login_username(self, username): + """Login the user, given the `username`.""" + if self.current_user != username: + self.login(InstructorTaskModuleTestCase.get_user_email(username), "test") + self.current_user = username + + def _create_user(self, username, is_staff=False): + """Creates a user and enrolls them in the test course.""" + email = InstructorTaskModuleTestCase.get_user_email(username) + thisuser = UserFactory.create(username=username, email=email, is_staff=is_staff) + CourseEnrollmentFactory.create(user=thisuser, course_id=self.course.id) + return thisuser + + def create_instructor(self, username): + """Creates an instructor for the test course.""" + return self._create_user(username, is_staff=True) + + def create_student(self, username): + """Creates a student for the test course.""" + return self._create_user(username, is_staff=False) + + @staticmethod + def problem_location(problem_url_name): + """ + Create an internal location for a test problem. 
+ """ + if "i4x:" in problem_url_name: + return problem_url_name + else: + return "i4x://{org}/{number}/problem/{problem_url_name}".format(org=TEST_COURSE_ORG, + number=TEST_COURSE_NUMBER, + problem_url_name=problem_url_name) + + def define_option_problem(self, problem_url_name): + """Create the problem definition so the answer is Option 1""" + factory = OptionResponseXMLFactory() + factory_args = {'question_text': 'The correct answer is {0}'.format(OPTION_1), + 'options': [OPTION_1, OPTION_2], + 'correct_option': OPTION_1, + 'num_responses': 2} + problem_xml = factory.build_xml(**factory_args) + ItemFactory.create(parent_location=self.problem_section.location, + template="i4x://edx/templates/problem/Blank_Common_Problem", + display_name=str(problem_url_name), + data=problem_xml) + + def redefine_option_problem(self, problem_url_name): + """Change the problem definition so the answer is Option 2""" + factory = OptionResponseXMLFactory() + factory_args = {'question_text': 'The correct answer is {0}'.format(OPTION_2), + 'options': [OPTION_1, OPTION_2], + 'correct_option': OPTION_2, + 'num_responses': 2} + problem_xml = factory.build_xml(**factory_args) + location = InstructorTaskTestCase.problem_location(problem_url_name) + self.module_store.update_item(location, problem_xml) + + def get_student_module(self, username, descriptor): + """Get StudentModule object for test course, given the `username` and the problem's `descriptor`.""" + return StudentModule.objects.get(course_id=self.course.id, + student=User.objects.get(username=username), + module_type=descriptor.location.category, + module_state_key=descriptor.location.url(), + ) + + @staticmethod + def get_task_status(task_id): + """Use api method to fetch task status, using mock request.""" + mock_request = Mock() + mock_request.REQUEST = {'task_id': task_id} + response = instructor_task_status(mock_request) + status = json.loads(response.content) + return status + + def create_task_request(self, 
requester_username): + """Generate request that can be used for submitting tasks""" + request = Mock() + request.user = User.objects.get(username=requester_username) + request.get_host = Mock(return_value="testhost") + request.META = {'REMOTE_ADDR': '0:0:0:0', 'SERVER_NAME': 'testhost'} + request.is_secure = Mock(return_value=False) + return request diff --git a/lms/djangoapps/instructor_task/tests/test_integration.py b/lms/djangoapps/instructor_task/tests/test_integration.py new file mode 100644 index 0000000000..d7a81a5b39 --- /dev/null +++ b/lms/djangoapps/instructor_task/tests/test_integration.py @@ -0,0 +1,475 @@ +""" +Integration Tests for LMS instructor-initiated background tasks + +Runs tasks on answers to course problems to validate that code +paths actually work. + +""" +import logging +import json +from mock import patch +import textwrap + +from celery.states import SUCCESS, FAILURE +from django.contrib.auth.models import User +from django.core.urlresolvers import reverse + +from capa.tests.response_xml_factory import (CodeResponseXMLFactory, + CustomResponseXMLFactory) +from xmodule.modulestore.tests.factories import ItemFactory +from xmodule.modulestore.exceptions import ItemNotFoundError + +from courseware.model_data import StudentModule + +from instructor_task.api import (submit_rescore_problem_for_all_students, + submit_rescore_problem_for_student, + submit_reset_problem_attempts_for_all_students, + submit_delete_problem_state_for_all_students) +from instructor_task.models import InstructorTask +from instructor_task.tests.test_base import (InstructorTaskModuleTestCase, TEST_COURSE_ORG, TEST_COURSE_NUMBER, + OPTION_1, OPTION_2) +from capa.responsetypes import StudentInputError + + +log = logging.getLogger(__name__) + + +class TestIntegrationTask(InstructorTaskModuleTestCase): + """ + Base class to provide general methods used for "integration" testing of particular tasks. 
+ """ + + def submit_student_answer(self, username, problem_url_name, responses): + """ + Use ajax interface to submit a student answer. + + Assumes the input list of responses has two values. + """ + def get_input_id(response_id): + """Creates input id using information about the test course and the current problem.""" + # Note that this is a capa-specific convention. The form is a version of the problem's + # URL, modified so that it can be easily stored in html, prepended with "input-" and + # appended with a sequence identifier for the particular response the input goes to. + return 'input_i4x-{0}-{1}-problem-{2}_{3}'.format(TEST_COURSE_ORG.lower(), + TEST_COURSE_NUMBER.replace('.', '_'), + problem_url_name, response_id) + + # make sure that the requested user is logged in, so that the ajax call works + # on the right problem: + self.login_username(username) + # make ajax call: + modx_url = reverse('modx_dispatch', + kwargs={'course_id': self.course.id, + 'location': InstructorTaskModuleTestCase.problem_location(problem_url_name), + 'dispatch': 'problem_check', }) + + # we assume we have two responses, so assign them the correct identifiers. 
+ resp = self.client.post(modx_url, { + get_input_id('2_1'): responses[0], + get_input_id('3_1'): responses[1], + }) + return resp + + def _assert_task_failure(self, entry_id, task_type, problem_url_name, expected_message): + """Confirm that expected values are stored in InstructorTask on task failure.""" + instructor_task = InstructorTask.objects.get(id=entry_id) + self.assertEqual(instructor_task.task_state, FAILURE) + self.assertEqual(instructor_task.requester.username, 'instructor') + self.assertEqual(instructor_task.task_type, task_type) + task_input = json.loads(instructor_task.task_input) + self.assertFalse('student' in task_input) + self.assertEqual(task_input['problem_url'], InstructorTaskModuleTestCase.problem_location(problem_url_name)) + status = json.loads(instructor_task.task_output) + self.assertEqual(status['exception'], 'ZeroDivisionError') + self.assertEqual(status['message'], expected_message) + # check status returned: + status = InstructorTaskModuleTestCase.get_task_status(instructor_task.task_id) + self.assertEqual(status['message'], expected_message) + + +class TestRescoringTask(TestIntegrationTask): + """ + Integration-style tests for rescoring problems in a background task. + + Exercises real problems with a minimum of patching. + """ + + def setUp(self): + self.initialize_course() + self.create_instructor('instructor') + self.create_student('u1') + self.create_student('u2') + self.create_student('u3') + self.create_student('u4') + self.logout() + + def render_problem(self, username, problem_url_name): + """ + Use ajax interface to request html for a problem. 
+ """ + # make sure that the requested user is logged in, so that the ajax call works + # on the right problem: + self.login_username(username) + # make ajax call: + modx_url = reverse('modx_dispatch', + kwargs={'course_id': self.course.id, + 'location': InstructorTaskModuleTestCase.problem_location(problem_url_name), + 'dispatch': 'problem_get', }) + resp = self.client.post(modx_url, {}) + return resp + + def check_state(self, username, descriptor, expected_score, expected_max_score, expected_attempts): + """ + Check that the StudentModule state contains the expected values. + + The student module is found for the test course, given the `username` and problem `descriptor`. + + Values checked include the number of attempts, the score, and the max score for a problem. + """ + module = self.get_student_module(username, descriptor) + self.assertEqual(module.grade, expected_score) + self.assertEqual(module.max_grade, expected_max_score) + state = json.loads(module.state) + attempts = state['attempts'] + self.assertEqual(attempts, expected_attempts) + if attempts > 0: + self.assertTrue('correct_map' in state) + self.assertTrue('student_answers' in state) + self.assertGreater(len(state['correct_map']), 0) + self.assertGreater(len(state['student_answers']), 0) + + def submit_rescore_all_student_answers(self, instructor, problem_url_name): + """Submits the particular problem for rescoring""" + return submit_rescore_problem_for_all_students(self.create_task_request(instructor), self.course.id, + InstructorTaskModuleTestCase.problem_location(problem_url_name)) + + def submit_rescore_one_student_answer(self, instructor, problem_url_name, student): + """Submits the particular problem for rescoring for a particular student""" + return submit_rescore_problem_for_student(self.create_task_request(instructor), self.course.id, + InstructorTaskModuleTestCase.problem_location(problem_url_name), + student) + + def test_rescoring_option_problem(self): + """Run rescore scenario on option 
problem""" + # get descriptor: + problem_url_name = 'H1P1' + self.define_option_problem(problem_url_name) + location = InstructorTaskModuleTestCase.problem_location(problem_url_name) + descriptor = self.module_store.get_instance(self.course.id, location) + + # first store answers for each of the separate users: + self.submit_student_answer('u1', problem_url_name, [OPTION_1, OPTION_1]) + self.submit_student_answer('u2', problem_url_name, [OPTION_1, OPTION_2]) + self.submit_student_answer('u3', problem_url_name, [OPTION_2, OPTION_1]) + self.submit_student_answer('u4', problem_url_name, [OPTION_2, OPTION_2]) + + self.check_state('u1', descriptor, 2, 2, 1) + self.check_state('u2', descriptor, 1, 2, 1) + self.check_state('u3', descriptor, 1, 2, 1) + self.check_state('u4', descriptor, 0, 2, 1) + + # update the data in the problem definition + self.redefine_option_problem(problem_url_name) + # confirm that simply rendering the problem again does not result in a change + # in the grade: + self.render_problem('u1', problem_url_name) + self.check_state('u1', descriptor, 2, 2, 1) + + # rescore the problem for only one student -- only that student's grade should change: + self.submit_rescore_one_student_answer('instructor', problem_url_name, User.objects.get(username='u1')) + self.check_state('u1', descriptor, 0, 2, 1) + self.check_state('u2', descriptor, 1, 2, 1) + self.check_state('u3', descriptor, 1, 2, 1) + self.check_state('u4', descriptor, 0, 2, 1) + + # rescore the problem for all students + self.submit_rescore_all_student_answers('instructor', problem_url_name) + self.check_state('u1', descriptor, 0, 2, 1) + self.check_state('u2', descriptor, 1, 2, 1) + self.check_state('u3', descriptor, 1, 2, 1) + self.check_state('u4', descriptor, 2, 2, 1) + + def test_rescoring_failure(self): + """Simulate a failure in rescoring a problem""" + problem_url_name = 'H1P1' + self.define_option_problem(problem_url_name) + self.submit_student_answer('u1', problem_url_name, [OPTION_1, 
OPTION_1]) + + expected_message = "bad things happened" + with patch('capa.capa_problem.LoncapaProblem.rescore_existing_answers') as mock_rescore: + mock_rescore.side_effect = ZeroDivisionError(expected_message) + instructor_task = self.submit_rescore_all_student_answers('instructor', problem_url_name) + self._assert_task_failure(instructor_task.id, 'rescore_problem', problem_url_name, expected_message) + + def test_rescoring_bad_unicode_input(self): + """Generate a real failure in rescoring a problem, with an answer including unicode""" + # At one point, the student answers that resulted in StudentInputErrors were being + # persisted (even though they were not counted as an attempt). That is not possible + # now, so it's harder to generate a test for how such input is handled. + problem_url_name = 'H1P1' + # set up an option problem -- doesn't matter really what problem it is, but we need + # it to have an answer. + self.define_option_problem(problem_url_name) + self.submit_student_answer('u1', problem_url_name, [OPTION_1, OPTION_1]) + + # return an input error as if it were a numerical response, with an embedded unicode character: + expected_message = u"Could not interpret '2/3\u03a9' as a number" + with patch('capa.capa_problem.LoncapaProblem.rescore_existing_answers') as mock_rescore: + mock_rescore.side_effect = StudentInputError(expected_message) + instructor_task = self.submit_rescore_all_student_answers('instructor', problem_url_name) + + # check instructor_task returned + instructor_task = InstructorTask.objects.get(id=instructor_task.id) + self.assertEqual(instructor_task.task_state, 'SUCCESS') + self.assertEqual(instructor_task.requester.username, 'instructor') + self.assertEqual(instructor_task.task_type, 'rescore_problem') + task_input = json.loads(instructor_task.task_input) + self.assertFalse('student' in task_input) + self.assertEqual(task_input['problem_url'], InstructorTaskModuleTestCase.problem_location(problem_url_name)) + status = 
json.loads(instructor_task.task_output) + self.assertEqual(status['attempted'], 1) + self.assertEqual(status['updated'], 0) + self.assertEqual(status['total'], 1) + + def define_code_response_problem(self, problem_url_name): + """ + Define an arbitrary code-response problem. + + We'll end up mocking its evaluation later. + """ + factory = CodeResponseXMLFactory() + grader_payload = json.dumps({"grader": "ps04/grade_square.py"}) + problem_xml = factory.build_xml(initial_display="def square(x):", + answer_display="answer", + grader_payload=grader_payload, + num_responses=2) + ItemFactory.create(parent_location=self.problem_section.location, + template="i4x://edx/templates/problem/Blank_Common_Problem", + display_name=str(problem_url_name), + data=problem_xml) + + def test_rescoring_code_problem(self): + """Run rescore scenario on problem with code submission""" + problem_url_name = 'H1P2' + self.define_code_response_problem(problem_url_name) + # we fully create the CodeResponse problem, but just pretend that we're queuing it: + with patch('capa.xqueue_interface.XQueueInterface.send_to_queue') as mock_send_to_queue: + mock_send_to_queue.return_value = (0, "Successfully queued") + self.submit_student_answer('u1', problem_url_name, ["answer1", "answer2"]) + + instructor_task = self.submit_rescore_all_student_answers('instructor', problem_url_name) + + instructor_task = InstructorTask.objects.get(id=instructor_task.id) + self.assertEqual(instructor_task.task_state, FAILURE) + status = json.loads(instructor_task.task_output) + self.assertEqual(status['exception'], 'NotImplementedError') + self.assertEqual(status['message'], "Problem's definition does not support rescoring") + + status = InstructorTaskModuleTestCase.get_task_status(instructor_task.task_id) + self.assertEqual(status['message'], "Problem's definition does not support rescoring") + + def define_randomized_custom_response_problem(self, problem_url_name, redefine=False): + """ + Defines a custom response 
problem that uses a random value to determine correctness. + + Generated answer is also returned as the `msg`, so that the value can be used as a + correct answer by a test. + + If the `redefine` flag is set, then change the definition of correctness (from equals + to not-equals). + """ + factory = CustomResponseXMLFactory() + script = textwrap.dedent(""" + def check_func(expect, answer_given): + expected = str(random.randint(0, 100)) + return {'ok': answer_given %s expected, 'msg': expected} + """ % ('!=' if redefine else '==')) + problem_xml = factory.build_xml(script=script, cfn="check_func", expect="42", num_responses=1) + if redefine: + self.module_store.update_item(InstructorTaskModuleTestCase.problem_location(problem_url_name), problem_xml) + else: + # Use "per-student" rerandomization so that check-problem can be called more than once. + # Using "always" means we cannot check a problem twice, but we want to call once to get the + # correct answer, and call a second time with that answer to confirm it's graded as correct. + # Per-student rerandomization will at least generate different seeds for different users, so + # we get a little more test coverage. 
+ ItemFactory.create(parent_location=self.problem_section.location, + template="i4x://edx/templates/problem/Blank_Common_Problem", + display_name=str(problem_url_name), + data=problem_xml, + metadata={"rerandomize": "per_student"}) + + def test_rescoring_randomized_problem(self): + """Run rescore scenario on custom problem that uses randomize""" + # First define the custom response problem: + problem_url_name = 'H1P1' + self.define_randomized_custom_response_problem(problem_url_name) + location = InstructorTaskModuleTestCase.problem_location(problem_url_name) + descriptor = self.module_store.get_instance(self.course.id, location) + # run with more than one user + userlist = ['u1', 'u2', 'u3', 'u4'] + for username in userlist: + # first render the problem, so that a seed will be created for this user + self.render_problem(username, problem_url_name) + # submit a bogus answer, in order to get the problem to tell us its real answer + dummy_answer = "1000" + self.submit_student_answer(username, problem_url_name, [dummy_answer, dummy_answer]) + # we should have gotten the problem wrong, since we're way out of range: + self.check_state(username, descriptor, 0, 1, 1) + # dig the correct answer out of the problem's message + module = self.get_student_module(username, descriptor) + state = json.loads(module.state) + correct_map = state['correct_map'] + log.info("Correct Map: %s", correct_map) + # only one response, so pull it out: + answer = correct_map.values()[0]['msg'] + self.submit_student_answer(username, problem_url_name, [answer, answer]) + # we should now get the problem right, with a second attempt: + self.check_state(username, descriptor, 1, 1, 2) + + # redefine the problem (as stored in Mongo) so that the definition of correct changes + self.define_randomized_custom_response_problem(problem_url_name, redefine=True) + # confirm that simply rendering the problem again does not result in a change + # in the grade (or the attempts): + self.render_problem('u1', 
problem_url_name) + self.check_state('u1', descriptor, 1, 1, 2) + + # rescore the problem for only one student -- only that student's grade should change + # (and none of the attempts): + self.submit_rescore_one_student_answer('instructor', problem_url_name, User.objects.get(username='u1')) + for username in userlist: + self.check_state(username, descriptor, 0 if username == 'u1' else 1, 1, 2) + + # rescore the problem for all students + self.submit_rescore_all_student_answers('instructor', problem_url_name) + + # all grades should change to being wrong (with no change in attempts) + for username in userlist: + self.check_state(username, descriptor, 0, 1, 2) + + +class TestResetAttemptsTask(TestIntegrationTask): + """ + Integration-style tests for resetting problem attempts in a background task. + + Exercises real problems with a minimum of patching. + """ + userlist = ['u1', 'u2', 'u3', 'u4'] + + def setUp(self): + self.initialize_course() + self.create_instructor('instructor') + for username in self.userlist: + self.create_student(username) + self.logout() + + def get_num_attempts(self, username, descriptor): + """returns number of attempts stored for `username` on problem `descriptor` for test course""" + module = self.get_student_module(username, descriptor) + state = json.loads(module.state) + return state['attempts'] + + def reset_problem_attempts(self, instructor, problem_url_name): + """Submits the current problem for resetting""" + return submit_reset_problem_attempts_for_all_students(self.create_task_request(instructor), self.course.id, + InstructorTaskModuleTestCase.problem_location(problem_url_name)) + + def test_reset_attempts_on_problem(self): + """Run reset-attempts scenario on option problem""" + # get descriptor: + problem_url_name = 'H1P1' + self.define_option_problem(problem_url_name) + location = InstructorTaskModuleTestCase.problem_location(problem_url_name) + descriptor = self.module_store.get_instance(self.course.id, location) + num_attempts 
= 3 + # first store answers for each of the separate users: + for _ in range(num_attempts): + for username in self.userlist: + self.submit_student_answer(username, problem_url_name, [OPTION_1, OPTION_1]) + + for username in self.userlist: + self.assertEquals(self.get_num_attempts(username, descriptor), num_attempts) + + self.reset_problem_attempts('instructor', problem_url_name) + + for username in self.userlist: + self.assertEquals(self.get_num_attempts(username, descriptor), 0) + + def test_reset_failure(self): + """Simulate a failure in resetting attempts on a problem""" + problem_url_name = 'H1P1' + self.define_option_problem(problem_url_name) + self.submit_student_answer('u1', problem_url_name, [OPTION_1, OPTION_1]) + + expected_message = "bad things happened" + with patch('courseware.models.StudentModule.save') as mock_save: + mock_save.side_effect = ZeroDivisionError(expected_message) + instructor_task = self.reset_problem_attempts('instructor', problem_url_name) + self._assert_task_failure(instructor_task.id, 'reset_problem_attempts', problem_url_name, expected_message) + + def test_reset_non_problem(self): + """confirm that a non-problem can still be successfully reset""" + problem_url_name = self.problem_section.location.url() + instructor_task = self.reset_problem_attempts('instructor', problem_url_name) + instructor_task = InstructorTask.objects.get(id=instructor_task.id) + self.assertEqual(instructor_task.task_state, SUCCESS) + + +class TestDeleteProblemTask(TestIntegrationTask): + """ + Integration-style tests for deleting problem state in a background task. + + Exercises real problems with a minimum of patching. 
+ """ + userlist = ['u1', 'u2', 'u3', 'u4'] + + def setUp(self): + self.initialize_course() + self.create_instructor('instructor') + for username in self.userlist: + self.create_student(username) + self.logout() + + def delete_problem_state(self, instructor, problem_url_name): + """Submits the current problem for deletion""" + return submit_delete_problem_state_for_all_students(self.create_task_request(instructor), self.course.id, + InstructorTaskModuleTestCase.problem_location(problem_url_name)) + + def test_delete_problem_state(self): + """Run delete-state scenario on option problem""" + # get descriptor: + problem_url_name = 'H1P1' + self.define_option_problem(problem_url_name) + location = InstructorTaskModuleTestCase.problem_location(problem_url_name) + descriptor = self.module_store.get_instance(self.course.id, location) + # first store answers for each of the separate users: + for username in self.userlist: + self.submit_student_answer(username, problem_url_name, [OPTION_1, OPTION_1]) + # confirm that state exists: + for username in self.userlist: + self.assertTrue(self.get_student_module(username, descriptor) is not None) + # run delete task: + self.delete_problem_state('instructor', problem_url_name) + # confirm that no state can be found: + for username in self.userlist: + with self.assertRaises(StudentModule.DoesNotExist): + self.get_student_module(username, descriptor) + + def test_delete_failure(self): + """Simulate a failure in deleting state of a problem""" + problem_url_name = 'H1P1' + self.define_option_problem(problem_url_name) + self.submit_student_answer('u1', problem_url_name, [OPTION_1, OPTION_1]) + + expected_message = "bad things happened" + with patch('courseware.models.StudentModule.delete') as mock_delete: + mock_delete.side_effect = ZeroDivisionError(expected_message) + instructor_task = self.delete_problem_state('instructor', problem_url_name) + self._assert_task_failure(instructor_task.id, 'delete_problem_state', problem_url_name, 
expected_message) + + def test_delete_non_problem(self): + """confirm that a non-problem can still be successfully deleted""" + problem_url_name = self.problem_section.location.url() + instructor_task = self.delete_problem_state('instructor', problem_url_name) + instructor_task = InstructorTask.objects.get(id=instructor_task.id) + self.assertEqual(instructor_task.task_state, SUCCESS) diff --git a/lms/djangoapps/instructor_task/tests/test_tasks.py b/lms/djangoapps/instructor_task/tests/test_tasks.py new file mode 100644 index 0000000000..9eb81a98c9 --- /dev/null +++ b/lms/djangoapps/instructor_task/tests/test_tasks.py @@ -0,0 +1,332 @@ +""" +Unit tests for LMS instructor-initiated background tasks, + +Runs tasks on answers to course problems to validate that code +paths actually work. + +""" +import json +from uuid import uuid4 + +from mock import Mock, patch + +from celery.states import SUCCESS, FAILURE + +from xmodule.modulestore.exceptions import ItemNotFoundError + +from courseware.model_data import StudentModule +from courseware.tests.factories import StudentModuleFactory +from student.tests.factories import UserFactory + +from instructor_task.models import InstructorTask +from instructor_task.tests.test_base import InstructorTaskModuleTestCase, TEST_COURSE_ORG, TEST_COURSE_NUMBER +from instructor_task.tests.factories import InstructorTaskFactory +from instructor_task.tasks import rescore_problem, reset_problem_attempts, delete_problem_state +from instructor_task.tasks_helper import UpdateProblemModuleStateError, update_problem_module_state + + +PROBLEM_URL_NAME = "test_urlname" + + +class TestTaskFailure(Exception): + pass + + +class TestInstructorTasks(InstructorTaskModuleTestCase): + def setUp(self): + super(InstructorTaskModuleTestCase, self).setUp() + self.initialize_course() + self.instructor = self.create_instructor('instructor') + self.problem_url = InstructorTaskModuleTestCase.problem_location(PROBLEM_URL_NAME) + + def _create_input_entry(self, 
student_ident=None): + """Creates an InstructorTask entry for testing.""" + task_id = str(uuid4()) + task_input = {'problem_url': self.problem_url} + if student_ident is not None: + task_input['student'] = student_ident + + instructor_task = InstructorTaskFactory.create(course_id=self.course.id, + requester=self.instructor, + task_input=json.dumps(task_input), + task_key='dummy value', + task_id=task_id) + return instructor_task + + def _get_xmodule_instance_args(self): + """ + Calculate dummy values for parameters needed for instantiating xmodule instances. + """ + return {'xqueue_callback_url_prefix': 'dummy_value', + 'request_info': {}, + } + + def _run_task_with_mock_celery(self, task_function, entry_id, task_id, expected_failure_message=None): + self.current_task = Mock() + self.current_task.request = Mock() + self.current_task.request.id = task_id + self.current_task.update_state = Mock() + if expected_failure_message is not None: + self.current_task.update_state.side_effect = TestTaskFailure(expected_failure_message) + with patch('instructor_task.tasks_helper._get_current_task') as mock_get_task: + mock_get_task.return_value = self.current_task + return task_function(entry_id, self._get_xmodule_instance_args()) + + def _test_missing_current_task(self, task_function): + # run without (mock) Celery running + task_entry = self._create_input_entry() + with self.assertRaises(UpdateProblemModuleStateError): + task_function(task_entry.id, self._get_xmodule_instance_args()) + + def test_rescore_missing_current_task(self): + self._test_missing_current_task(rescore_problem) + + def test_reset_missing_current_task(self): + self._test_missing_current_task(reset_problem_attempts) + + def test_delete_missing_current_task(self): + self._test_missing_current_task(delete_problem_state) + + def _test_undefined_problem(self, task_function): + # run with celery, but no problem defined + task_entry = self._create_input_entry() + with self.assertRaises(ItemNotFoundError): +
self._run_task_with_mock_celery(task_function, task_entry.id, task_entry.task_id) + + def test_rescore_undefined_problem(self): + self._test_undefined_problem(rescore_problem) + + def test_reset_undefined_problem(self): + self._test_undefined_problem(reset_problem_attempts) + + def test_delete_undefined_problem(self): + self._test_undefined_problem(delete_problem_state) + + def _test_run_with_task(self, task_function, action_name, expected_num_updated): + # run with some StudentModules for the problem + task_entry = self._create_input_entry() + status = self._run_task_with_mock_celery(task_function, task_entry.id, task_entry.task_id) + # check return value + self.assertEquals(status.get('attempted'), expected_num_updated) + self.assertEquals(status.get('updated'), expected_num_updated) + self.assertEquals(status.get('total'), expected_num_updated) + self.assertEquals(status.get('action_name'), action_name) + self.assertGreater(status.get('duration_ms'), 0) + # compare with entry in table: + entry = InstructorTask.objects.get(id=task_entry.id) + self.assertEquals(json.loads(entry.task_output), status) + self.assertEquals(entry.task_state, SUCCESS) + + def _test_run_with_no_state(self, task_function, action_name): + # run with no StudentModules for the problem + self.define_option_problem(PROBLEM_URL_NAME) + self._test_run_with_task(task_function, action_name, 0) + + def test_rescore_with_no_state(self): + self._test_run_with_no_state(rescore_problem, 'rescored') + + def test_reset_with_no_state(self): + self._test_run_with_no_state(reset_problem_attempts, 'reset') + + def test_delete_with_no_state(self): + self._test_run_with_no_state(delete_problem_state, 'deleted') + + def _create_students_with_state(self, num_students, state=None): + """Create students, a problem, and StudentModule objects for testing""" + self.define_option_problem(PROBLEM_URL_NAME) + students = [ + UserFactory.create(username='robot%d' % i, email='robot+test+%d@edx.org' % i) + for i in xrange(num_students)
+ ] + for student in students: + StudentModuleFactory.create(course_id=self.course.id, + module_state_key=self.problem_url, + student=student, + state=state) + return students + + def _assert_num_attempts(self, students, num_attempts): + """Check that the number of attempts is the same for all students""" + for student in students: + module = StudentModule.objects.get(course_id=self.course.id, + student=student, + module_state_key=self.problem_url) + state = json.loads(module.state) + self.assertEquals(state['attempts'], num_attempts) + + def test_reset_with_some_state(self): + initial_attempts = 3 + input_state = json.dumps({'attempts': initial_attempts}) + num_students = 10 + students = self._create_students_with_state(num_students, input_state) + # check that entries were set correctly + self._assert_num_attempts(students, initial_attempts) + # run the task + self._test_run_with_task(reset_problem_attempts, 'reset', num_students) + # check that entries were reset + self._assert_num_attempts(students, 0) + + def test_delete_with_some_state(self): + # This will create StudentModule entries -- we don't have to worry about + # the state inside them.
+ num_students = 10 + students = self._create_students_with_state(num_students) + # check that entries were created correctly + for student in students: + StudentModule.objects.get(course_id=self.course.id, + student=student, + module_state_key=self.problem_url) + self._test_run_with_task(delete_problem_state, 'deleted', num_students) + # confirm that no state can be found anymore: + for student in students: + with self.assertRaises(StudentModule.DoesNotExist): + StudentModule.objects.get(course_id=self.course.id, + student=student, + module_state_key=self.problem_url) + + def _test_reset_with_student(self, use_email): + # run with some StudentModules for the problem + num_students = 10 + initial_attempts = 3 + input_state = json.dumps({'attempts': initial_attempts}) + students = self._create_students_with_state(num_students, input_state) + # check that entries were set correctly + for student in students: + module = StudentModule.objects.get(course_id=self.course.id, + student=student, + module_state_key=self.problem_url) + state = json.loads(module.state) + self.assertEquals(state['attempts'], initial_attempts) + + if use_email: + student_ident = students[3].email + else: + student_ident = students[3].username + task_entry = self._create_input_entry(student_ident) + + status = self._run_task_with_mock_celery(reset_problem_attempts, task_entry.id, task_entry.task_id) + # check return value + self.assertEquals(status.get('attempted'), 1) + self.assertEquals(status.get('updated'), 1) + self.assertEquals(status.get('total'), 1) + self.assertEquals(status.get('action_name'), 'reset') + self.assertGreater(status.get('duration_ms'), 0) + # compare with entry in table: + entry = InstructorTask.objects.get(id=task_entry.id) + self.assertEquals(json.loads(entry.task_output), status) + self.assertEquals(entry.task_state, SUCCESS) + # check that the correct entry was reset + for index, student in enumerate(students): + module = StudentModule.objects.get(course_id=self.course.id, +
student=student, + module_state_key=self.problem_url) + state = json.loads(module.state) + if index == 3: + self.assertEquals(state['attempts'], 0) + else: + self.assertEquals(state['attempts'], initial_attempts) + + def test_reset_with_student_username(self): + self._test_reset_with_student(False) + + def test_reset_with_student_email(self): + self._test_reset_with_student(True) + + def _test_run_with_failure(self, task_function, expected_message): + # run with no StudentModules for the problem, + # because we will fail before entering the loop. + task_entry = self._create_input_entry() + self.define_option_problem(PROBLEM_URL_NAME) + with self.assertRaises(TestTaskFailure): + self._run_task_with_mock_celery(task_function, task_entry.id, task_entry.task_id, expected_message) + # compare with entry in table: + entry = InstructorTask.objects.get(id=task_entry.id) + self.assertEquals(entry.task_state, FAILURE) + output = json.loads(entry.task_output) + self.assertEquals(output['exception'], 'TestTaskFailure') + self.assertEquals(output['message'], expected_message) + + def test_rescore_with_failure(self): + self._test_run_with_failure(rescore_problem, 'We expected this to fail') + + def test_reset_with_failure(self): + self._test_run_with_failure(reset_problem_attempts, 'We expected this to fail') + + def test_delete_with_failure(self): + self._test_run_with_failure(delete_problem_state, 'We expected this to fail') + + def _test_run_with_long_error_msg(self, task_function): + # run with an error message that is so long it will require + # truncation (as well as the jettisoning of the traceback). 
+ task_entry = self._create_input_entry() + self.define_option_problem(PROBLEM_URL_NAME) + expected_message = "x" * 1500 + with self.assertRaises(TestTaskFailure): + self._run_task_with_mock_celery(task_function, task_entry.id, task_entry.task_id, expected_message) + # compare with entry in table: + entry = InstructorTask.objects.get(id=task_entry.id) + self.assertEquals(entry.task_state, FAILURE) + self.assertGreater(1023, len(entry.task_output)) + output = json.loads(entry.task_output) + self.assertEquals(output['exception'], 'TestTaskFailure') + self.assertEquals(output['message'], expected_message[:len(output['message']) - 3] + "...") + self.assertTrue('traceback' not in output) + + def test_rescore_with_long_error_msg(self): + self._test_run_with_long_error_msg(rescore_problem) + + def test_reset_with_long_error_msg(self): + self._test_run_with_long_error_msg(reset_problem_attempts) + + def test_delete_with_long_error_msg(self): + self._test_run_with_long_error_msg(delete_problem_state) + + def _test_run_with_short_error_msg(self, task_function): + # run with an error message that is short enough to fit + # in the output, but long enough that the traceback won't. + # Confirm that the traceback is truncated. 
+ task_entry = self._create_input_entry() + self.define_option_problem(PROBLEM_URL_NAME) + expected_message = "x" * 900 + with self.assertRaises(TestTaskFailure): + self._run_task_with_mock_celery(task_function, task_entry.id, task_entry.task_id, expected_message) + # compare with entry in table: + entry = InstructorTask.objects.get(id=task_entry.id) + self.assertEquals(entry.task_state, FAILURE) + self.assertGreater(1023, len(entry.task_output)) + output = json.loads(entry.task_output) + self.assertEquals(output['exception'], 'TestTaskFailure') + self.assertEquals(output['message'], expected_message) + self.assertEquals(output['traceback'][-3:], "...") + + def test_rescore_with_short_error_msg(self): + self._test_run_with_short_error_msg(rescore_problem) + + def test_reset_with_short_error_msg(self): + self._test_run_with_short_error_msg(reset_problem_attempts) + + def test_delete_with_short_error_msg(self): + self._test_run_with_short_error_msg(delete_problem_state) + + def test_successful_result_too_long(self): + # while we don't expect the existing tasks to generate output that is too + # long, we can test the framework will handle such an occurrence. 
+ task_entry = self._create_input_entry() + self.define_option_problem(PROBLEM_URL_NAME) + action_name = 'x' * 1000 + update_fcn = lambda _module_descriptor, _student_module, _xmodule_instance_args: True + task_function = (lambda entry_id, xmodule_instance_args: + update_problem_module_state(entry_id, + update_fcn, action_name, filter_fcn=None, + xmodule_instance_args=None)) + + with self.assertRaises(ValueError): + self._run_task_with_mock_celery(task_function, task_entry.id, task_entry.task_id) + # compare with entry in table: + entry = InstructorTask.objects.get(id=task_entry.id) + self.assertEquals(entry.task_state, FAILURE) + self.assertGreater(1023, len(entry.task_output)) + output = json.loads(entry.task_output) + self.assertEquals(output['exception'], 'ValueError') + self.assertTrue("Length of task output is too long" in output['message']) + self.assertTrue('traceback' not in output) diff --git a/lms/djangoapps/instructor_task/tests/test_views.py b/lms/djangoapps/instructor_task/tests/test_views.py new file mode 100644 index 0000000000..9020bf6e60 --- /dev/null +++ b/lms/djangoapps/instructor_task/tests/test_views.py @@ -0,0 +1,266 @@ + +""" +Tests for LMS instructor background task queue management +""" +import json +from celery.states import SUCCESS, FAILURE, REVOKED, PENDING + +from mock import Mock, patch + +from django.utils.datastructures import MultiValueDict + +from instructor_task.models import PROGRESS +from instructor_task.tests.test_base import (InstructorTaskTestCase, + TEST_FAILURE_MESSAGE, + TEST_FAILURE_EXCEPTION) +from instructor_task.views import instructor_task_status, get_task_completion_info + + +class InstructorTaskReportTest(InstructorTaskTestCase): + """ + Tests API and view methods that involve the reporting of status for background tasks.
+ """ + + def _get_instructor_task_status(self, task_id): + """Returns status corresponding to task_id via api method.""" + request = Mock() + request.REQUEST = {'task_id': task_id} + return instructor_task_status(request) + + def test_instructor_task_status(self): + instructor_task = self._create_failure_entry() + task_id = instructor_task.task_id + request = Mock() + request.REQUEST = {'task_id': task_id} + response = instructor_task_status(request) + output = json.loads(response.content) + self.assertEquals(output['task_id'], task_id) + + def test_missing_instructor_task_status(self): + task_id = "missing_id" + request = Mock() + request.REQUEST = {'task_id': task_id} + response = instructor_task_status(request) + output = json.loads(response.content) + self.assertEquals(output, {}) + + def test_instructor_task_status_list(self): + # Fetch status for existing tasks by arg list, as if called from ajax. + # Note that ajax does something funny with the marshalling of + # list data, so the key value has "[]" appended to it. 
+ task_ids = [(self._create_failure_entry()).task_id for _ in range(1, 5)] + request = Mock() + request.REQUEST = MultiValueDict({'task_ids[]': task_ids}) + response = instructor_task_status(request) + output = json.loads(response.content) + self.assertEquals(len(output), len(task_ids)) + for task_id in task_ids: + self.assertEquals(output[task_id]['task_id'], task_id) + + def test_get_status_from_failure(self): + # get status for a task that has already failed + instructor_task = self._create_failure_entry() + task_id = instructor_task.task_id + response = self._get_instructor_task_status(task_id) + output = json.loads(response.content) + self.assertEquals(output['message'], TEST_FAILURE_MESSAGE) + self.assertEquals(output['succeeded'], False) + self.assertEquals(output['task_id'], task_id) + self.assertEquals(output['task_state'], FAILURE) + self.assertFalse(output['in_progress']) + expected_progress = {'exception': TEST_FAILURE_EXCEPTION, + 'message': TEST_FAILURE_MESSAGE} + self.assertEquals(output['task_progress'], expected_progress) + + def test_get_status_from_success(self): + # get status for a task that has already succeeded + instructor_task = self._create_success_entry() + task_id = instructor_task.task_id + response = self._get_instructor_task_status(task_id) + output = json.loads(response.content) + self.assertEquals(output['message'], "Problem rescored for 2 of 3 students (out of 5)") + self.assertEquals(output['succeeded'], False) + self.assertEquals(output['task_id'], task_id) + self.assertEquals(output['task_state'], SUCCESS) + self.assertFalse(output['in_progress']) + expected_progress = {'attempted': 3, + 'updated': 2, + 'total': 5, + 'action_name': 'rescored'} + self.assertEquals(output['task_progress'], expected_progress) + + def _test_get_status_from_result(self, task_id, mock_result): + """ + Provides mock result to caller of instructor_task_status, and returns resulting output. 
+ """ + with patch('celery.result.AsyncResult.__new__') as mock_result_ctor: + mock_result_ctor.return_value = mock_result + response = self._get_instructor_task_status(task_id) + output = json.loads(response.content) + self.assertEquals(output['task_id'], task_id) + return output + + def test_get_status_to_pending(self): + # get status for a task that hasn't begun to run yet + instructor_task = self._create_entry() + task_id = instructor_task.task_id + mock_result = Mock() + mock_result.task_id = task_id + mock_result.state = PENDING + output = self._test_get_status_from_result(task_id, mock_result) + for key in ['message', 'succeeded', 'task_progress']: + self.assertTrue(key not in output) + self.assertEquals(output['task_state'], 'PENDING') + self.assertTrue(output['in_progress']) + + def test_update_progress_to_progress(self): + # view task entry for task in progress + instructor_task = self._create_progress_entry() + task_id = instructor_task.task_id + mock_result = Mock() + mock_result.task_id = task_id + mock_result.state = PROGRESS + mock_result.result = {'attempted': 5, + 'updated': 4, + 'total': 10, + 'action_name': 'rescored'} + output = self._test_get_status_from_result(task_id, mock_result) + self.assertEquals(output['message'], "Progress: rescored 4 of 5 so far (out of 10)") + self.assertEquals(output['succeeded'], False) + self.assertEquals(output['task_state'], PROGRESS) + self.assertTrue(output['in_progress']) + self.assertEquals(output['task_progress'], mock_result.result) + + def test_update_progress_to_failure(self): + # view task entry for task in progress that later fails + instructor_task = self._create_progress_entry() + task_id = instructor_task.task_id + mock_result = Mock() + mock_result.task_id = task_id + mock_result.state = FAILURE + mock_result.result = NotImplementedError("This task later failed.") + mock_result.traceback = "random traceback" + output = self._test_get_status_from_result(task_id, mock_result) + 
self.assertEquals(output['message'], "This task later failed.") + self.assertEquals(output['succeeded'], False) + self.assertEquals(output['task_state'], FAILURE) + self.assertFalse(output['in_progress']) + expected_progress = {'exception': 'NotImplementedError', + 'message': "This task later failed.", + 'traceback': "random traceback"} + self.assertEquals(output['task_progress'], expected_progress) + + def test_update_progress_to_revoked(self): + # view task entry for task in progress that is later revoked + instructor_task = self._create_progress_entry() + task_id = instructor_task.task_id + mock_result = Mock() + mock_result.task_id = task_id + mock_result.state = REVOKED + output = self._test_get_status_from_result(task_id, mock_result) + self.assertEquals(output['message'], "Task revoked before running") + self.assertEquals(output['succeeded'], False) + self.assertEquals(output['task_state'], REVOKED) + self.assertFalse(output['in_progress']) + expected_progress = {'message': "Task revoked before running"} + self.assertEquals(output['task_progress'], expected_progress) + + def _get_output_for_task_success(self, attempted, updated, total, student=None): + """Returns the output of instructor_task_status() for a mocked successful task.""" + # view task entry for task in progress + instructor_task = self._create_progress_entry(student) + task_id = instructor_task.task_id + mock_result = Mock() + mock_result.task_id = task_id + mock_result.state = SUCCESS + mock_result.result = {'attempted': attempted, + 'updated': updated, + 'total': total, + 'action_name': 'rescored'} + output = self._test_get_status_from_result(task_id, mock_result) + return output + + def test_update_progress_to_success(self): + output = self._get_output_for_task_success(10, 8, 10) + self.assertEquals(output['message'], "Problem rescored for 8 of 10 students") + self.assertEquals(output['succeeded'], False) + self.assertEquals(output['task_state'], SUCCESS) + self.assertFalse(output['in_progress']) +
expected_progress = {'attempted': 10, + 'updated': 8, + 'total': 10, + 'action_name': 'rescored'} + self.assertEquals(output['task_progress'], expected_progress) + + def test_success_messages(self): + output = self._get_output_for_task_success(0, 0, 10) + self.assertEqual(output['message'], "Unable to find any students with submissions to be rescored (out of 10)") + self.assertFalse(output['succeeded']) + + output = self._get_output_for_task_success(10, 0, 10) + self.assertEqual(output['message'], "Problem failed to be rescored for any of 10 students") + self.assertFalse(output['succeeded']) + + output = self._get_output_for_task_success(10, 8, 10) + self.assertEqual(output['message'], "Problem rescored for 8 of 10 students") + self.assertFalse(output['succeeded']) + + output = self._get_output_for_task_success(9, 8, 10) + self.assertEqual(output['message'], "Problem rescored for 8 of 9 students (out of 10)") + self.assertFalse(output['succeeded']) + + output = self._get_output_for_task_success(10, 10, 10) + self.assertEqual(output['message'], "Problem successfully rescored for 10 students") + self.assertTrue(output['succeeded']) + + output = self._get_output_for_task_success(0, 0, 1, student=self.student) + self.assertTrue("Unable to find submission to be rescored for student" in output['message']) + self.assertFalse(output['succeeded']) + + output = self._get_output_for_task_success(1, 0, 1, student=self.student) + self.assertTrue("Problem failed to be rescored for student" in output['message']) + self.assertFalse(output['succeeded']) + + output = self._get_output_for_task_success(1, 1, 1, student=self.student) + self.assertTrue("Problem successfully rescored for student" in output['message']) + self.assertTrue(output['succeeded']) + + def test_get_info_for_queuing_task(self): + # get status for a task that is still running: + instructor_task = self._create_entry() + succeeded, message = get_task_completion_info(instructor_task) + self.assertFalse(succeeded) + 
self.assertEquals(message, "No status information available") + + def test_get_info_for_missing_output(self): + # check for missing task_output + instructor_task = self._create_success_entry() + instructor_task.task_output = None + succeeded, message = get_task_completion_info(instructor_task) + self.assertFalse(succeeded) + self.assertEquals(message, "No status information available") + + def test_get_info_for_broken_output(self): + # check for non-JSON task_output + instructor_task = self._create_success_entry() + instructor_task.task_output = "{ bad" + succeeded, message = get_task_completion_info(instructor_task) + self.assertFalse(succeeded) + self.assertEquals(message, "No parsable status information available") + + def test_get_info_for_empty_output(self): + # check for JSON task_output with missing keys + instructor_task = self._create_success_entry() + instructor_task.task_output = "{}" + succeeded, message = get_task_completion_info(instructor_task) + self.assertFalse(succeeded) + self.assertEquals(message, "No progress status information available") + + def test_get_info_for_broken_input(self): + # check for non-JSON task_input, but then just ignore it + instructor_task = self._create_success_entry() + instructor_task.task_input = "{ bad" + succeeded, message = get_task_completion_info(instructor_task) + self.assertFalse(succeeded) + self.assertEquals(message, "Problem rescored for 2 of 3 students (out of 5)") + diff --git a/lms/djangoapps/instructor_task/views.py b/lms/djangoapps/instructor_task/views.py new file mode 100644 index 0000000000..40f128d08e --- /dev/null +++ b/lms/djangoapps/instructor_task/views.py @@ -0,0 +1,172 @@ + +import json +import logging + +from django.http import HttpResponse + +from celery.states import FAILURE, REVOKED, READY_STATES + +from instructor_task.api_helper import (get_status_from_instructor_task, + get_updated_instructor_task) +from instructor_task.models import PROGRESS + + +log = logging.getLogger(__name__) + +# 
return status for completed tasks and tasks in progress +STATES_WITH_STATUS = list(READY_STATES) + [PROGRESS] + + +def _get_instructor_task_status(task_id): + """ + Returns status for a specific task. + + Written as an internal method here (rather than as a helper) + so that get_task_completion_info() can be called without + causing a circular dependency (since it's also called directly). + """ + instructor_task = get_updated_instructor_task(task_id) + status = get_status_from_instructor_task(instructor_task) + if instructor_task is not None and instructor_task.task_state in STATES_WITH_STATUS: + succeeded, message = get_task_completion_info(instructor_task) + status['message'] = message + status['succeeded'] = succeeded + return status + + +def instructor_task_status(request): + """ + View method that returns the status of a course-related task or tasks. + + Status is returned as a JSON-serialized dict, wrapped as the content of an HttpResponse. + + The task_id can be specified to this view in one of two ways: + + * by making a request containing 'task_id' as a parameter with a single value + Returns a dict containing status information for the specified task_id + + * by making a request containing 'task_ids' as a parameter, + with a list of task_id values. + Returns a dict of dicts, with the task_id as key, and the corresponding + dict containing status information for the specified task_id + + Task_id values that are unrecognized are skipped. + + The dict with status information for a task contains the following keys: + 'message': on complete tasks, status message reporting on final progress, + or providing exception message if failed. For tasks in progress, + indicates the current progress. + 'succeeded': on complete tasks or tasks in progress, boolean value indicates if the + task outcome was successful: did it achieve what it set out to do. + This is in contrast with a successful task_state, which indicates that the + task merely completed.
+ 'task_id': id assigned by LMS and used by celery. + 'task_state': state of task as stored in celery's result store. + 'in_progress': boolean indicating if task is still running. + 'task_progress': dict containing progress information. This includes: + 'attempted': number of attempts made + 'updated': number of attempts that "succeeded" + 'total': number of possible subtasks to attempt + 'action_name': user-visible verb to use in status messages. Should be past-tense. + 'duration_ms': how long the task has (or had) been running. + 'exception': name of exception class raised in failed tasks. + 'message': returned for failed and revoked tasks. + 'traceback': optional, returned if task failed and produced a traceback. + + """ + + output = {} + if 'task_id' in request.REQUEST: + task_id = request.REQUEST['task_id'] + output = _get_instructor_task_status(task_id) + elif 'task_ids[]' in request.REQUEST: + tasks = request.REQUEST.getlist('task_ids[]') + for task_id in tasks: + task_output = _get_instructor_task_status(task_id) + if task_output is not None: + output[task_id] = task_output + + return HttpResponse(json.dumps(output, indent=4)) + + +def get_task_completion_info(instructor_task): + """ + Construct progress message from progress information in InstructorTask entry. + + Returns (boolean, message string) duple, where the boolean indicates + whether the task completed without incident. (It is possible for a + task to attempt many sub-tasks, such as rescoring many students' problem + responses, and while the task runs to completion, some of the students' + responses could not be rescored.) + + Used for providing messages to instructor_task_status(), as well as + external calls for providing course task submission history information. 
+ """ + succeeded = False + + if instructor_task.task_state not in STATES_WITH_STATUS: + return (succeeded, "No status information available") + + # we're more surprised if there is no output for a completed task, but just warn: + if instructor_task.task_output is None: + log.warning("No task_output information found for instructor_task {0}".format(instructor_task.task_id)) + return (succeeded, "No status information available") + + try: + task_output = json.loads(instructor_task.task_output) + except ValueError: + fmt = "No parsable task_output information found for instructor_task {0}: {1}" + log.warning(fmt.format(instructor_task.task_id, instructor_task.task_output)) + return (succeeded, "No parsable status information available") + + if instructor_task.task_state in [FAILURE, REVOKED]: + return (succeeded, task_output.get('message', 'No message provided')) + + if any([key not in task_output for key in ['action_name', 'attempted', 'updated', 'total']]): + fmt = "Invalid task_output information found for instructor_task {0}: {1}" + log.warning(fmt.format(instructor_task.task_id, instructor_task.task_output)) + return (succeeded, "No progress status information available") + + action_name = task_output['action_name'] + num_attempted = task_output['attempted'] + num_updated = task_output['updated'] + num_total = task_output['total'] + + student = None + try: + task_input = json.loads(instructor_task.task_input) + except ValueError: + fmt = "No parsable task_input information found for instructor_task {0}: {1}" + log.warning(fmt.format(instructor_task.task_id, instructor_task.task_input)) + else: + student = task_input.get('student') + + if instructor_task.task_state == PROGRESS: + # special message for providing progress updates: + msg_format = "Progress: {action} {updated} of {attempted} so far" + elif student is not None: + if num_attempted == 0: + msg_format = "Unable to find submission to be {action} for student '{student}'" + elif num_updated == 0: + 
msg_format = "Problem failed to be {action} for student '{student}'" + else: + succeeded = True + msg_format = "Problem successfully {action} for student '{student}'" + elif num_attempted == 0: + msg_format = "Unable to find any students with submissions to be {action}" + elif num_updated == 0: + msg_format = "Problem failed to be {action} for any of {attempted} students" + elif num_updated == num_attempted: + succeeded = True + msg_format = "Problem successfully {action} for {attempted} students" + else: # num_updated < num_attempted + msg_format = "Problem {action} for {updated} of {attempted} students" + + if student is None and num_attempted != num_total: + msg_format += " (out of {total})" + + # Update status in task result object itself: + message = msg_format.format(action=action_name, updated=num_updated, + attempted=num_attempted, total=num_total, + student=student) + return (succeeded, message) diff --git a/lms/djangoapps/lms_migration/management/commands/create_user.py b/lms/djangoapps/lms_migration/management/commands/create_user.py index 86b355e571..ca0e1a756f 100644 --- a/lms/djangoapps/lms_migration/management/commands/create_user.py +++ b/lms/djangoapps/lms_migration/management/commands/create_user.py @@ -18,6 +18,7 @@ from django.core.management.base import BaseCommand from student.models import UserProfile, Registration from external_auth.models import ExternalAuthMap from django.contrib.auth.models import User, Group +from pytz import UTC class MyCompleter(object): # Custom completer @@ -124,7 +125,7 @@ class Command(BaseCommand): external_credentials=json.dumps(credentials), ) eamap.user = user - eamap.dtsignup = datetime.datetime.now() + eamap.dtsignup = datetime.datetime.now(UTC) eamap.save() print "User %s created successfully!" 
% user diff --git a/lms/djangoapps/open_ended_grading/tests.py b/lms/djangoapps/open_ended_grading/tests.py index 13d780df12..3b6c992881 100644 --- a/lms/djangoapps/open_ended_grading/tests.py +++ b/lms/djangoapps/open_ended_grading/tests.py @@ -160,7 +160,7 @@ class TestPeerGradingService(LoginEnrollmentTestCase): self.course_id = "edX/toy/2012_Fall" self.toy = modulestore().get_course(self.course_id) location = "i4x://edX/toy/peergrading/init" - model_data = {'data': ""} + model_data = {'data': "", 'location': location} self.mock_service = peer_grading_service.MockPeerGradingService() self.system = ModuleSystem( ajax_url=location, @@ -172,9 +172,9 @@ class TestPeerGradingService(LoginEnrollmentTestCase): s3_interface=test_util_open_ended.S3_INTERFACE, open_ended_grading_interface=test_util_open_ended.OPEN_ENDED_GRADING_INTERFACE ) - self.descriptor = peer_grading_module.PeerGradingDescriptor(self.system, location, model_data) - model_data = {} - self.peer_module = peer_grading_module.PeerGradingModule(self.system, location, self.descriptor, model_data) + self.descriptor = peer_grading_module.PeerGradingDescriptor(self.system, model_data) + model_data = {'location': location} + self.peer_module = peer_grading_module.PeerGradingModule(self.system, self.descriptor, model_data) self.peer_module.peer_gs = self.mock_service self.logout() diff --git a/lms/djangoapps/psychometrics/psychoanalyze.py b/lms/djangoapps/psychometrics/psychoanalyze.py index dd60776594..ab9a5e6242 100644 --- a/lms/djangoapps/psychometrics/psychoanalyze.py +++ b/lms/djangoapps/psychometrics/psychoanalyze.py @@ -15,6 +15,7 @@ from scipy.optimize import curve_fit from django.conf import settings from django.db.models import Sum, Max from psychometrics.models import * +from pytz import UTC log = logging.getLogger("mitx.psychometrics") @@ -110,7 +111,7 @@ def make_histogram(ydata, bins=None): nbins = len(bins) hist = dict(zip(bins, [0] * nbins)) for y in ydata: - for b in bins[::-1]: # in reverse 
order + for b in bins[::-1]: # in reverse order if y > b: hist[b] += 1 break @@ -149,7 +150,7 @@ def generate_plots_for_problem(problem): agdat = pmdset.aggregate(Sum('attempts'), Max('attempts')) max_attempts = agdat['attempts__max'] - total_attempts = agdat['attempts__sum'] # not used yet + total_attempts = agdat['attempts__sum'] # not used yet msg += "max attempts = %d" % max_attempts @@ -200,7 +201,7 @@ def generate_plots_for_problem(problem): dtsv = StatVar() for pmd in pmdset: try: - checktimes = eval(pmd.checktimes) # update log of attempt timestamps + checktimes = eval(pmd.checktimes) # update log of attempt timestamps except: continue if len(checktimes) < 2: @@ -208,7 +209,7 @@ def generate_plots_for_problem(problem): ct0 = checktimes[0] for ct in checktimes[1:]: dt = (ct - ct0).total_seconds() / 60.0 - if dt < 20: # ignore if dt too long + if dt < 20: # ignore if dt too long dtset.append(dt) dtsv += dt ct0 = ct @@ -244,7 +245,7 @@ def generate_plots_for_problem(problem): ylast = y + ylast yset['ydat'] = ydat - if len(ydat) > 3: # try to fit to logistic function if enough data points + if len(ydat) > 3: # try to fit to logistic function if enough data points try: cfp = curve_fit(func_2pl, xdat, ydat, [1.0, max_attempts / 2.0]) yset['fitparam'] = cfp @@ -337,10 +338,10 @@ def make_psychometrics_data_update_handler(course_id, user, module_state_key): log.exception("no attempts for %s (state=%s)" % (sm, sm.state)) try: - checktimes = eval(pmd.checktimes) # update log of attempt timestamps + checktimes = eval(pmd.checktimes) # update log of attempt timestamps except: checktimes = [] - checktimes.append(datetime.datetime.now()) + checktimes.append(datetime.datetime.now(UTC)) pmd.checktimes = checktimes try: pmd.save() diff --git a/lms/djangoapps/simplewiki/models.py b/lms/djangoapps/simplewiki/models.py index 75cdb8aa7a..4026f40b87 100644 --- a/lms/djangoapps/simplewiki/models.py +++ b/lms/djangoapps/simplewiki/models.py @@ -11,6 +11,7 @@ from markdown import 
markdown from .wiki_settings import * from util.cache import cache +from pytz import UTC class ShouldHaveExactlyOneRootSlug(Exception): @@ -265,7 +266,7 @@ class Revision(models.Model): return else: import datetime - self.article.modified_on = datetime.datetime.now() + self.article.modified_on = datetime.datetime.now(UTC) self.article.save() # Increment counter according to previous revision diff --git a/lms/envs/acceptance.py b/lms/envs/acceptance.py index 3b87bb4326..700fc89670 100644 --- a/lms/envs/acceptance.py +++ b/lms/envs/acceptance.py @@ -24,7 +24,7 @@ modulestore_options = { 'db': 'test_xmodule', 'collection': 'acceptance_modulestore', 'fs_root': TEST_ROOT / "data", - 'render_template': 'mitxmako.shortcuts.render_to_string', + 'render_template': 'mitxmako.shortcuts.render_to_string' } MODULESTORE = { diff --git a/lms/envs/aws.py b/lms/envs/aws.py index 1645505dd7..c8c49c2b1e 100644 --- a/lms/envs/aws.py +++ b/lms/envs/aws.py @@ -172,17 +172,19 @@ for name, value in ENV_TOKENS.get("CODE_JAIL", {}).items(): COURSES_WITH_UNSAFE_CODE = ENV_TOKENS.get("COURSES_WITH_UNSAFE_CODE", []) -# If segment.io key specified, load it and turn on segment IO if the feature flag is set -SEGMENT_IO_LMS_KEY = ENV_TOKENS.get('SEGMENT_IO_LMS_KEY') -if SEGMENT_IO_LMS_KEY: - MITX_FEATURES['SEGMENT_IO_LMS'] = ENV_TOKENS.get('SEGMENT_IO_LMS', False) - ############################## SECURE AUTH ITEMS ############### # Secret things: passwords, access keys, etc. 
with open(ENV_ROOT / CONFIG_PREFIX + "auth.json") as auth_file: AUTH_TOKENS = json.load(auth_file) +############### Mixed Related(Secure/Not-Secure) Items ########## +# If Segment.io key specified, load it and enable Segment.io if the feature flag is set +SEGMENT_IO_LMS_KEY = AUTH_TOKENS.get('SEGMENT_IO_LMS_KEY') +if SEGMENT_IO_LMS_KEY: + MITX_FEATURES['SEGMENT_IO_LMS'] = ENV_TOKENS.get('SEGMENT_IO_LMS', False) + + SECRET_KEY = AUTH_TOKENS['SECRET_KEY'] AWS_ACCESS_KEY_ID = AUTH_TOKENS["AWS_ACCESS_KEY_ID"] diff --git a/lms/envs/cms/dev.py b/lms/envs/cms/dev.py index e55c6d61b5..f8c43148b0 100644 --- a/lms/envs/cms/dev.py +++ b/lms/envs/cms/dev.py @@ -21,7 +21,7 @@ modulestore_options = { 'db': 'xmodule', 'collection': 'modulestore', 'fs_root': DATA_DIR, - 'render_template': 'mitxmako.shortcuts.render_to_string', + 'render_template': 'mitxmako.shortcuts.render_to_string' } MODULESTORE = { diff --git a/lms/envs/common.py b/lms/envs/common.py index b2c6f15c39..076528e91e 100644 --- a/lms/envs/common.py +++ b/lms/envs/common.py @@ -50,8 +50,8 @@ MITX_FEATURES = { 'SAMPLE': False, 'USE_DJANGO_PIPELINE': True, 'DISPLAY_HISTOGRAMS_TO_STAFF': True, - 'REROUTE_ACTIVATION_EMAIL': False, # nonempty string = address for all activation emails - 'DEBUG_LEVEL': 0, # 0 = lowest level, least verbose, 255 = max level, most verbose + 'REROUTE_ACTIVATION_EMAIL': False, # nonempty string = address for all activation emails + 'DEBUG_LEVEL': 0, # 0 = lowest level, least verbose, 255 = max level, most verbose ## DO NOT SET TO True IN THIS FILE ## Doing so will cause all courses to be released on production @@ -67,13 +67,13 @@ MITX_FEATURES = { # university to use for branding purposes 'SUBDOMAIN_BRANDING': False, - 'FORCE_UNIVERSITY_DOMAIN': False, # set this to the university domain to use, as an override to HTTP_HOST + 'FORCE_UNIVERSITY_DOMAIN': False, # set this to the university domain to use, as an override to HTTP_HOST # set to None to do no university selection 'ENABLE_TEXTBOOK': 
True, 'ENABLE_DISCUSSION_SERVICE': True, - 'ENABLE_PSYCHOMETRICS': False, # real-time psychometrics (eg item response theory analysis in instructor dashboard) + 'ENABLE_PSYCHOMETRICS': False, # real-time psychometrics (eg item response theory analysis in instructor dashboard) 'ENABLE_DJANGO_ADMIN_SITE': False, # set true to enable django's admin site, even on prod (e.g. for course ops) 'ENABLE_SQL_TRACKING_LOGS': False, @@ -84,7 +84,7 @@ MITX_FEATURES = { 'DISABLE_LOGIN_BUTTON': False, # used in systems where login is automatic, eg MIT SSL - 'STUB_VIDEO_FOR_TESTING': False, # do not display video when running automated acceptance tests + 'STUB_VIDEO_FOR_TESTING': False, # do not display video when running automated acceptance tests # extrernal access methods 'ACCESS_REQUIRE_STAFF_FOR_COURSE': False, @@ -122,7 +122,10 @@ MITX_FEATURES = { 'USE_CUSTOM_THEME': False, # Do autoplay videos for students - 'AUTOPLAY_VIDEOS': True + 'AUTOPLAY_VIDEOS': True, + + # Enable instructor dash to submit background tasks + 'ENABLE_INSTRUCTOR_BACKGROUND_TASKS': True, } # Used for A/B testing @@ -132,7 +135,7 @@ DEFAULT_GROUPS = [] GENERATE_PROFILE_SCORES = False # Used with XQueue -XQUEUE_WAITTIME_BETWEEN_REQUESTS = 5 # seconds +XQUEUE_WAITTIME_BETWEEN_REQUESTS = 5 # seconds ############################# SET PATH INFORMATION ############################# @@ -192,8 +195,8 @@ TEMPLATE_CONTEXT_PROCESSORS = ( 'django.core.context_processors.static', 'django.contrib.messages.context_processors.messages', #'django.core.context_processors.i18n', - 'django.contrib.auth.context_processors.auth', # this is required for admin - 'django.core.context_processors.csrf', # necessary for csrf protection + 'django.contrib.auth.context_processors.auth', # this is required for admin + 'django.core.context_processors.csrf', # necessary for csrf protection # Added for django-wiki 'django.core.context_processors.media', @@ -206,7 +209,7 @@ TEMPLATE_CONTEXT_PROCESSORS = ( 
'mitxmako.shortcuts.marketing_link_context_processor', ) -STUDENT_FILEUPLOAD_MAX_SIZE = 4 * 1000 * 1000 # 4 MB +STUDENT_FILEUPLOAD_MAX_SIZE = 4 * 1000 * 1000 # 4 MB MAX_FILEUPLOADS_PER_INPUT = 20 # FIXME: @@ -216,7 +219,7 @@ LIB_URL = '/static/js/' # Dev machines shouldn't need the book # BOOK_URL = '/static/book/' -BOOK_URL = 'https://mitxstatic.s3.amazonaws.com/book_images/' # For AWS deploys +BOOK_URL = 'https://mitxstatic.s3.amazonaws.com/book_images/' # For AWS deploys # RSS_URL = r'lms/templates/feed.rss' # PRESS_URL = r'' RSS_TIMEOUT = 600 @@ -240,14 +243,14 @@ COURSE_TITLE = "Circuits and Electronics" ### Dark code. Should be enabled in local settings for devel. -ENABLE_MULTICOURSE = False # set to False to disable multicourse display (see lib.util.views.mitxhome) +ENABLE_MULTICOURSE = False # set to False to disable multicourse display (see lib.util.views.mitxhome) WIKI_ENABLED = False ### COURSE_DEFAULT = '6.002x_Fall_2012' -COURSE_SETTINGS = {'6.002x_Fall_2012': {'number': '6.002x', +COURSE_SETTINGS = {'6.002x_Fall_2012': {'number': '6.002x', 'title': 'Circuits and Electronics', 'xmlpath': '6002x/', 'location': 'i4x://edx/6002xs12/course/6.002x_Fall_2012', @@ -308,6 +311,7 @@ import monitoring.exceptions # noqa # Change DEBUG/TEMPLATE_DEBUG in your environment settings files, not here DEBUG = False TEMPLATE_DEBUG = False +USE_TZ = True # Site info SITE_ID = 1 @@ -342,8 +346,8 @@ STATICFILES_DIRS = [ FAVICON_PATH = 'images/favicon.ico' # Locale/Internationalization -TIME_ZONE = 'America/New_York' # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name -LANGUAGE_CODE = 'en' # http://www.i18nguy.com/unicode/language-identifiers.html +TIME_ZONE = 'America/New_York' # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name +LANGUAGE_CODE = 'en' # http://www.i18nguy.com/unicode/language-identifiers.html USE_I18N = True USE_L10N = True @@ -364,7 +368,7 @@ ALLOWED_GITRELOAD_IPS = ['207.97.227.253', '50.57.128.197', '108.171.174.178'] # setting is, I'm just bumping 
the expiration time to something absurd (100 # years). This is only used if DEFAULT_FILE_STORAGE is overriden to use S3 # in the global settings.py -AWS_QUERYSTRING_EXPIRE = 10 * 365 * 24 * 60 * 60 # 10 years +AWS_QUERYSTRING_EXPIRE = 10 * 365 * 24 * 60 * 60 # 10 years ################################# SIMPLEWIKI ################################### SIMPLE_WIKI_REQUIRE_LOGIN_EDIT = True @@ -373,8 +377,8 @@ SIMPLE_WIKI_REQUIRE_LOGIN_VIEW = False ################################# WIKI ################################### WIKI_ACCOUNT_HANDLING = False WIKI_EDITOR = 'course_wiki.editors.CodeMirror' -WIKI_SHOW_MAX_CHILDREN = 0 # We don't use the little menu that shows children of an article in the breadcrumb -WIKI_ANONYMOUS = False # Don't allow anonymous access until the styling is figured out +WIKI_SHOW_MAX_CHILDREN = 0 # We don't use the little menu that shows children of an article in the breadcrumb +WIKI_ANONYMOUS = False # Don't allow anonymous access until the styling is figured out WIKI_CAN_CHANGE_PERMISSIONS = lambda article, user: user.is_staff or user.is_superuser WIKI_CAN_ASSIGN = lambda article, user: user.is_staff or user.is_superuser @@ -592,7 +596,7 @@ if os.path.isdir(DATA_DIR): new_filename = os.path.splitext(filename)[0] + ".js" if os.path.exists(js_dir / new_filename): coffee_timestamp = os.stat(js_dir / filename).st_mtime - js_timestamp = os.stat(js_dir / new_filename).st_mtime + js_timestamp = os.stat(js_dir / new_filename).st_mtime if coffee_timestamp <= js_timestamp: continue os.system("rm %s" % (js_dir / new_filename)) @@ -690,15 +694,16 @@ INSTALLED_APPS = ( 'util', 'certificates', 'instructor', + 'instructor_task', 'open_ended_grading', 'psychometrics', 'licenses', 'course_groups', #For the wiki - 'wiki', # The new django-wiki from benjaoming + 'wiki', # The new django-wiki from benjaoming 'django_notify', - 'course_wiki', # Our customizations + 'course_wiki', # Our customizations 'mptt', 'sekizai', #'wiki.plugins.attachments', @@ -710,7 +715,7 
@@ INSTALLED_APPS = ( 'foldit', # For testing - 'django.contrib.admin', # only used in DEBUG mode + 'django.contrib.admin', # only used in DEBUG mode 'debug', # Discussion forums diff --git a/lms/envs/dev_mongo.py b/lms/envs/dev_mongo.py index dfbf473b45..1f6b5899f1 100644 --- a/lms/envs/dev_mongo.py +++ b/lms/envs/dev_mongo.py @@ -19,7 +19,7 @@ MODULESTORE = { 'db': 'xmodule', 'collection': 'modulestore', 'fs_root': GITHUB_REPO_ROOT, - 'render_template': 'mitxmako.shortcuts.render_to_string', + 'render_template': 'mitxmako.shortcuts.render_to_string' } } } diff --git a/lms/envs/test.py b/lms/envs/test.py index 6691d50106..3ccfa24014 100644 --- a/lms/envs/test.py +++ b/lms/envs/test.py @@ -20,8 +20,10 @@ from path import path # can test everything else :) MITX_FEATURES['DISABLE_START_DATES'] = True -# Until we have discussion actually working in test mode, just turn it off -MITX_FEATURES['ENABLE_DISCUSSION_SERVICE'] = True +# Most tests don't use the discussion service, so we turn it off to speed them up. +# Tests that do can enable this flag, but must use the UrlResetMixin class to force urls.py +# to reload +MITX_FEATURES['ENABLE_DISCUSSION_SERVICE'] = False MITX_FEATURES['ENABLE_SERVICE_STATUS'] = True diff --git a/lms/static/js/pending_tasks.js b/lms/static/js/pending_tasks.js new file mode 100644 index 0000000000..ebeb896efa --- /dev/null +++ b/lms/static/js/pending_tasks.js @@ -0,0 +1,100 @@ +// Define an InstructorTaskProgress object for updating a table on the instructor +// dashboard that shows the current background tasks that are currently running +// for the instructor's course. Any tasks that were running when the page is +// first displayed are passed in as instructor_tasks, and populate the "Pending Instructor +// Task" table. The InstructorTaskProgress is bound to this table, and periodically +// polls the LMS to see if any of the tasks has completed. Once a task is complete, +// it is not included in any further polling. 
+ +(function() { + + var __bind = function(fn, me){ return function(){ return fn.apply(me, arguments); }; }; + + this.InstructorTaskProgress = (function() { + + function InstructorTaskProgress(element) { + this.update_progress = __bind(this.update_progress, this); + this.get_status = __bind(this.get_status, this); + this.element = element; + this.entries = $(element).find('.task-progress-entry'); + if (window.queuePollerID) { + window.clearTimeout(window.queuePollerID); + } + // Hardcode the initial delay before the first refresh to one second: + window.queuePollerID = window.setTimeout(this.get_status, 1000); + } + + InstructorTaskProgress.prototype.$ = function(selector) { + return $(selector, this.element); + }; + + InstructorTaskProgress.prototype.update_progress = function(response) { + var _this = this; + // Response should be a dict with an entry for each requested task_id, + // with a "task_state" and "in_progress" key and optionally a "message" + // and a "task_progress.duration_ms" key. + var something_in_progress = false; + for (var task_id in response) { + var task_dict = response[task_id]; + // find the corresponding entry, and update it: + var entry = $(_this.element).find('[data-task-id="' + task_id + '"]'); + entry.find('.task-state').text(task_dict.task_state); + var duration_value = (task_dict.task_progress && task_dict.task_progress.duration_ms + && Math.round(task_dict.task_progress.duration_ms/1000)) || 'unknown'; + entry.find('.task-duration').text(duration_value); + var progress_value = task_dict.message || ''; + entry.find('.task-progress').text(progress_value); + // if the task is complete, then change the entry so it won't + // be queried again. Otherwise set a flag. + if (task_dict.in_progress === true) { + something_in_progress = true; + } else { + entry.data('inProgress', "False"); + } + } + + // if some entries are still incomplete, then repoll: + // Hardcode the refresh interval to be every five seconds.
+ // TODO: allow the refresh interval to be set. (And if it is disabled, + // then don't set the timeout at all.) + if (something_in_progress) { + window.queuePollerID = window.setTimeout(_this.get_status, 5000); + } else { + delete window.queuePollerID; + } + }; + + InstructorTaskProgress.prototype.get_status = function() { + var _this = this; + var task_ids = []; + + // Construct the array of ids to get status for, by + // including the subset of entries that are still in progress. + this.entries.each(function(idx, element) { + var task_id = $(element).data('taskId'); + var in_progress = $(element).data('inProgress'); + if (in_progress === "True") { + task_ids.push(task_id); + } + }); + + // Make call to get status for these ids. + // Note that the keyname here ends up with "[]" being appended + // in the POST parameter that shows up on the Django server. + // TODO: add error handler. + var ajax_url = '/instructor_task_status/'; + var data = {'task_ids': task_ids }; + $.post(ajax_url, data).done(this.update_progress); + }; + + return InstructorTaskProgress; + })(); + +}).call(this); + +// once the page is rendered, create the progress object +var instructorTaskProgress; +$(document).ready(function() { + instructorTaskProgress = new InstructorTaskProgress($('#task-progress-wrapper')); +}); + diff --git a/lms/templates/courseware/instructor_dashboard.html b/lms/templates/courseware/instructor_dashboard.html index aa3c33e43a..ef1eb174fc 100644 --- a/lms/templates/courseware/instructor_dashboard.html +++ b/lms/templates/courseware/instructor_dashboard.html @@ -9,7 +9,9 @@ - +%if instructor_tasks is not None: + > +%endif <%include file="/courseware/course_navigation.html" args="active_page='instructor'" /> @@ -193,19 +195,78 @@ function goto( mode)
    + %endif + %if settings.MITX_FEATURES.get('ENABLE_INSTRUCTOR_BACKGROUND_TASKS'): +

    Course-specific grade adjustment

    + +

    + Specify a particular problem in the course here by its url: + +

    +

    + You may use just the "urlname" if a problem, or "modulename/urlname" if not. + (For example, if the location is i4x://university/course/problem/problemname, + then just provide the problemname. + If the location is i4x://university/course/notaproblem/someothername, then + provide notaproblem/someothername.) +

    +

    + Then select an action: + + +

    +

    +

    These actions run in the background, and status for active tasks will appear in a table below. + To see status for all tasks submitted for this problem, click on this button: +

    +

    + +

    + +
    %endif

    Student-specific grade inspection and adjustment

    -

    edX email address or their username:

    -

    -

    and, if you want to reset the number of attempts for a problem, the urlname of that problem

    -

    +

    + Specify the edX email address or username of a student here: + +

    +

    + Click this, and a link to student's progress page will appear below: + +

    +

    + Specify a particular problem in the course here by its url: + +

    +

    + You may use just the "urlname" if a problem, or "modulename/urlname" if not. + (For example, if the location is i4x://university/course/problem/problemname, + then just provide the problemname. + If the location is i4x://university/course/notaproblem/someothername, then + provide notaproblem/someothername.) +

    +

    + Then select an action: + + %if settings.MITX_FEATURES.get('ENABLE_COURSE_BACKGROUND_TASKS'): + + %endif +

    %if instructor_access: -

    You may also delete the entire state of a student for a problem: -

    -

    To delete the state of other XBlocks specify modulename/urlname, eg - combinedopenended/Humanities_SA_Peer

    +

    + You may also delete the entire state of a student for the specified module: + +

    + %endif + %if settings.MITX_FEATURES.get('ENABLE_COURSE_BACKGROUND_TASKS'): +

    Rescoring runs in the background, and status for active tasks will appear in a table below. + To see status for all tasks submitted for this course and student, click on this button: +

    +

    + +

    %endif %endif @@ -233,6 +294,7 @@ function goto( mode) ##----------------------------------------------------------------------------- %if modeflag.get('Admin'): + %if instructor_access:

    @@ -372,6 +434,7 @@ function goto( mode) %if msg:

    ${msg}

    %endif + ##----------------------------------------------------------------------------- %if modeflag.get('Analytics'): @@ -558,6 +621,69 @@ function goto( mode)

    %endif +## Output tasks in progress + +%if instructor_tasks is not None and len(instructor_tasks) > 0: +
    +

    Pending Instructor Tasks

    +
    +
    Name Date Added URL
    + +
    + + + + + + + + + + + %for tasknum, instructor_task in enumerate(instructor_tasks): + + + + + + + + + + + %endfor +
    Task TypeTask inputsTask IdRequesterSubmittedTask StateDuration (sec)Task Progress
    ${instructor_task.task_type}${instructor_task.task_input}${instructor_task.task_id}${instructor_task.requester}${instructor_task.created}${instructor_task.task_state}unknownunknown
    + +
    + +%endif + +##----------------------------------------------------------------------------- + +%if course_stats and modeflag.get('Psychometrics') is None: + +
    +
    +

    +


    +

    ${course_stats['title'] | h}

    + + + %for hname in course_stats['header']: + + %endfor + + %for row in course_stats['data']: + + %for value in row: + + %endfor + + %endfor +
    ${hname | h}
    ${value | h}
    +

    +%endif + ##----------------------------------------------------------------------------- %if modeflag.get('Psychometrics'): diff --git a/lms/templates/instructor/staff_grading.html b/lms/templates/instructor/staff_grading.html index 1c5f7364ad..0a28a2b026 100644 --- a/lms/templates/instructor/staff_grading.html +++ b/lms/templates/instructor/staff_grading.html @@ -29,7 +29,7 @@

    Instructions

    -

    This is the list of problems that current need to be graded in order to train the machine learning models. Each problem needs to be trained separately, and we have indicated the number of student submissions that need to be graded in order for a model to be generated. You can grade more than the minimum required number of submissions--this will improve the accuracy of machine learning, though with diminishing returns. You can see the current accuracy of machine learning while grading.

    +

    This is the list of problems that currently need to be graded in order to train the machine learning models. Each problem needs to be trained separately, and we have indicated the number of student submissions that need to be graded in order for a model to be generated. You can grade more than the minimum required number of submissions--this will improve the accuracy of machine learning, though with diminishing returns. You can see the current accuracy of machine learning while grading.

    Problem List

    diff --git a/lms/templates/videoalpha.html b/lms/templates/videoalpha.html index 2028d3c320..07c7dbee27 100644 --- a/lms/templates/videoalpha.html +++ b/lms/templates/videoalpha.html @@ -18,6 +18,7 @@ data-start="${start}" data-end="${end}" data-caption-asset-path="${caption_asset_path}" + data-autoplay="${autoplay}" >
    diff --git a/lms/templates/widgets/segment-io.html b/lms/templates/widgets/segment-io.html index 6b4ace8375..dea222653e 100644 --- a/lms/templates/widgets/segment-io.html +++ b/lms/templates/widgets/segment-io.html @@ -1,7 +1,9 @@ -% if settings.MITX_FEATURES.get('SEGMENT_IO_LMS'): -% else: - - - -% endif diff --git a/lms/urls.py b/lms/urls.py index 8f393584ac..1d34ebf3af 100644 --- a/lms/urls.py +++ b/lms/urls.py @@ -98,6 +98,8 @@ if not settings.MITX_FEATURES["USE_CUSTOM_THEME"]: url(r'^press$', 'student.views.press', name="press"), url(r'^media-kit$', 'static_template_view.views.render', {'template': 'media-kit.html'}, name="media-kit"), + url(r'^faq$', 'static_template_view.views.render', + {'template': 'faq.html'}, name="faq_edx"), url(r'^help$', 'static_template_view.views.render', {'template': 'help.html'}, name="help_edx"), @@ -125,7 +127,7 @@ for key, value in settings.MKTG_URL_LINK_MAP.items(): continue # These urls are enabled separately - if key == "ROOT" or key == "COURSES": + if key == "ROOT" or key == "COURSES" or key == "FAQ": continue # Make the assumptions that the templates are all in the same dir @@ -392,6 +394,11 @@ if settings.MITX_FEATURES.get('ENABLE_SERVICE_STATUS'): url(r'^status/', include('service_status.urls')), ) +if settings.MITX_FEATURES.get('ENABLE_INSTRUCTOR_BACKGROUND_TASKS'): + urlpatterns += ( + url(r'^instructor_task_status/$', 'instructor_task.views.instructor_task_status', name='instructor_task_status'), + ) + # FoldIt views urlpatterns += ( # The path is hardcoded into their app... 
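For reviewers: the `instructor_task_status` URL registered in the `lms/urls.py` hunk above returns one status dict per requested task id. The sketch below is not part of the patch; it just illustrates the payload shape described in the view's docstring, with entirely made-up values (the task id and numbers are hypothetical).

```python
import json

# Hypothetical per-task status dict, keyed as documented in
# instructor_task.views.instructor_task_status; all values invented:
status = {
    "task_id": "deadbeef-0000-0000-0000-000000000000",
    "task_state": "PROGRESS",
    "in_progress": True,
    "succeeded": False,
    "message": "Progress: rescored 10 of 25 so far",
    "task_progress": {
        "action_name": "rescored",
        "attempted": 10,
        "updated": 9,
        "total": 25,
        "duration_ms": 4200,
    },
}

# When 'task_ids[]' is posted, the view serializes a dict of such dicts,
# keyed by task_id, as the HttpResponse body:
payload = json.dumps({status["task_id"]: status}, indent=4)
round_tripped = json.loads(payload)
assert round_tripped[status["task_id"]]["task_state"] == "PROGRESS"
```

This is the same structure `pending_tasks.js` consumes in `update_progress`, reading `task_state`, `in_progress`, `message`, and `task_progress.duration_ms`.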
diff --git a/lms/xmodule_namespace.py b/lms/xmodule_namespace.py index 6b78d18db0..aaef0b76db 100644 --- a/lms/xmodule_namespace.py +++ b/lms/xmodule_namespace.py @@ -1,15 +1,15 @@ """ Namespace that defines fields common to all blocks used in the LMS """ -from xblock.core import Namespace, Boolean, Scope, String -from xmodule.fields import Date, Timedelta, StringyFloat, StringyBoolean +from xblock.core import Namespace, Boolean, Scope, String, Float +from xmodule.fields import Date, Timedelta class LmsNamespace(Namespace): """ Namespace that defines fields common to all blocks used in the LMS """ - hide_from_toc = StringyBoolean( + hide_from_toc = Boolean( help="Whether to display this module in the table of contents", default=False, scope=Scope.settings @@ -37,7 +37,7 @@ class LmsNamespace(Namespace): ) showanswer = String(help="When to show the problem answer to the student", scope=Scope.settings, default="closed") rerandomize = String(help="When to rerandomize the problem", default="always", scope=Scope.settings) - days_early_for_beta = StringyFloat( + days_early_for_beta = Float( help="Number of days early to show content to beta users", default=None, scope=Scope.settings diff --git a/rakefiles/assets.rake b/rakefiles/assets.rake index 009c87048c..764d049a68 100644 --- a/rakefiles/assets.rake +++ b/rakefiles/assets.rake @@ -52,7 +52,7 @@ def sass_cmd(watch=false, debug=false) "sass #{debug ? '--debug-info' : '--style compressed'} " + "--load-path #{sass_load_paths.join(' ')} " + "--require ./common/static/sass/bourbon/lib/bourbon.rb " + - "#{watch ? '--watch' : '--update'} #{sass_watch_paths.join(' ')}" + "#{watch ? 
'--watch' : '--update'} -E utf-8 #{sass_watch_paths.join(' ')}" end desc "Compile all assets" @@ -78,7 +78,7 @@ namespace :assets do end {:xmodule => [:install_python_prereqs], - :coffee => [:install_node_prereqs], + :coffee => [:install_node_prereqs, :'assets:coffee:clobber'], :sass => [:install_ruby_prereqs, :preprocess]}.each_pair do |asset_type, prereq_tasks| desc "Compile all #{asset_type} assets" task asset_type => prereq_tasks do @@ -127,6 +127,11 @@ namespace :assets do multitask :coffee => 'assets:xmodule' namespace :coffee do multitask :debug => 'assets:xmodule:debug' + + desc "Remove compiled coffeescript files" + task :clobber do + FileList['*/static/coffee/**/*.js'].each {|f| File.delete(f)} + end end namespace :xmodule do diff --git a/rakefiles/jasmine.rake b/rakefiles/jasmine.rake index ab3209c9ec..0f532fdf6f 100644 --- a/rakefiles/jasmine.rake +++ b/rakefiles/jasmine.rake @@ -61,10 +61,10 @@ def template_jasmine_runner(lib) yield File.expand_path(template_output) end -def jasmine_browser(url, wait=10) +def jasmine_browser(url, jitter=3, wait=10) # Jitter starting the browser so that the tests don't all try and # start the browser simultaneously - sleep(rand(3)) + sleep(rand(jitter)) sh("python -m webbrowser -t '#{url}'") sleep(wait) end @@ -87,6 +87,15 @@ end end end + desc "Open jasmine tests for #{system} in your default browser, and dynamically recompile coffeescript" + task :'browser:watch' => :'assets:coffee:_watch' do + django_for_jasmine(system, true) do |jasmine_url| + jasmine_browser(jasmine_url, jitter=0, wait=0) + end + puts "Press ENTER to terminate".red + $stdin.gets + end + desc "Use phantomjs to run jasmine tests for #{system} from the console" task :phantomjs do Rake::Task[:assets].invoke(system, 'jasmine') diff --git a/requirements/edx/github.txt b/requirements/edx/github.txt index 367b22fb46..a0c0a14c8c 100644 --- a/requirements/edx/github.txt +++ b/requirements/edx/github.txt @@ -3,11 +3,11 @@ # Third-party: -e 
git://github.com/edx/django-staticfiles.git@6d2504e5c8#egg=django-staticfiles -e git://github.com/edx/django-pipeline.git#egg=django-pipeline --e git://github.com/edx/django-wiki.git@e2e84558#egg=django-wiki +-e git://github.com/edx/django-wiki.git@ac906abe#egg=django-wiki -e git://github.com/dementrock/pystache_custom.git@776973740bdaad83a3b029f96e415a7d1e8bec2f#egg=pystache_custom-dev -e git://github.com/eventbrite/zendesk.git@d53fe0e81b623f084e91776bcf6369f8b7b63879#egg=zendesk # Our libraries: --e git+https://github.com/edx/XBlock.git@2144a25d#egg=XBlock --e git+https://github.com/edx/codejail.git@5fb5fa0#egg=codejail +-e git+https://github.com/edx/XBlock.git@4d8735e883#egg=XBlock +-e git+https://github.com/edx/codejail.git@0a1b468#egg=codejail -e git+https://github.com/edx/diff-cover.git@v0.1.1#egg=diff_cover diff --git a/test_root/data/videoalpha/gizmo.mp4 b/test_root/data/videoalpha/gizmo.mp4 new file mode 100644 index 0000000000..1fc478842f Binary files /dev/null and b/test_root/data/videoalpha/gizmo.mp4 differ diff --git a/test_root/data/videoalpha/gizmo.ogv b/test_root/data/videoalpha/gizmo.ogv new file mode 100644 index 0000000000..2c4a447f1f Binary files /dev/null and b/test_root/data/videoalpha/gizmo.ogv differ diff --git a/test_root/data/videoalpha/gizmo.webm b/test_root/data/videoalpha/gizmo.webm new file mode 100644 index 0000000000..95d5031a86 Binary files /dev/null and b/test_root/data/videoalpha/gizmo.webm differ
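A recurring fix in this diff replaces naive `datetime.datetime.now()` calls with timezone-aware `datetime.datetime.now(UTC)` (see the `pytz` imports added in `create_user.py`, `psychoanalyze.py`, and `simplewiki/models.py`, plus `USE_TZ = True` in `lms/envs/common.py`). A minimal sketch of why this matters; the stdlib fallback is an assumption for environments without pytz installed.

```python
import datetime

try:
    from pytz import UTC  # the diff uses pytz throughout
except ImportError:
    UTC = datetime.timezone.utc  # stdlib equivalent (assumed acceptable)

naive = datetime.datetime.now()      # tzinfo is None; comparing it against an
                                     # aware datetime raises TypeError
aware = datetime.datetime.now(UTC)   # timezone-aware, as USE_TZ = True expects

assert naive.tzinfo is None
assert aware.utcoffset() == datetime.timedelta(0)
```

Mixing the two is exactly the intermittent failure mode the changelog entry "all dates and times are now time zone aware datetimes" is guarding against.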