Merge pull request #19668 from edx/awais/removed_old_lettuce_tests_for_courseware_problems

removed lettuce files
Authored by Agha Awais on 2019-01-31 10:29:58 +05:00, committed by GitHub.
2 changed files with 0 additions and 395 deletions


@@ -1,215 +0,0 @@
@shard_1 @requires_stub_xqueue
Feature: LMS.Answer problems
    As a student in an edX course
    In order to test my understanding of the material
    I want to answer problems

    Scenario: I can reset a problem
        Given I am viewing a randomization "<Randomization>" "<ProblemType>" problem with reset button on
        And I answer a "<ProblemType>" problem "<Correctness>ly"
        When I reset the problem
        Then my "<ProblemType>" answer is marked "unanswered"
        And The "<ProblemType>" problem displays a "blank" answer

        Examples:
            | ProblemType     | Correctness | Randomization |
            | drop down       | correct     | always        |
            | drop down       | incorrect   | always        |
            | multiple choice | correct     | always        |
            | multiple choice | incorrect   | always        |
            | checkbox        | correct     | always        |
            | checkbox        | incorrect   | always        |
            | radio           | correct     | always        |
            | radio           | incorrect   | always        |
            | numerical       | incorrect   | always        |
            #| formula        | correct     | always        |
            #| formula        | incorrect   | always        |
            | script          | correct     | always        |
            | script          | incorrect   | always        |
            | radio_text      | correct     | always        |
            | radio_text      | incorrect   | always        |
            | checkbox_text   | correct     | always        |
            | checkbox_text   | incorrect   | always        |
            | image           | correct     | always        |
            | image           | incorrect   | always        |

    Scenario: I can reset a non-randomized problem that I answer incorrectly
        Given I am viewing a randomization "<Randomization>" "<ProblemType>" problem with reset button on
        And I answer a "<ProblemType>" problem "<Correctness>ly"
        When I reset the problem
        Then my "<ProblemType>" answer is marked "unanswered"
        And The "<ProblemType>" problem displays a "blank" answer

        Examples:
            | ProblemType     | Correctness | Randomization |
            | drop down       | incorrect   | never         |
            | multiple choice | incorrect   | never         |
            | checkbox        | incorrect   | never         |
            # TE-572
            #| radio          | incorrect   | never         |
            #| string         | incorrect   | never         |
            | numerical       | incorrect   | never         |
            #| formula        | incorrect   | never         |
            # TE-572 failing intermittently
            #| script         | incorrect   | never         |
            | radio_text      | incorrect   | never         |
            | checkbox_text   | incorrect   | never         |
            | image           | incorrect   | never         |

    Scenario: The reset button doesn't show up
        Given I am viewing a randomization "<Randomization>" "<ProblemType>" problem with reset button on
        And I answer a "<ProblemType>" problem "<Correctness>ly"
        Then The "Reset" button does not appear

        Examples:
            | ProblemType     | Correctness | Randomization |
            | drop down       | correct     | never         |
            | multiple choice | correct     | never         |
            | checkbox        | correct     | never         |
            | radio           | correct     | never         |
            #| string         | correct     | never         |
            | numerical       | correct     | never         |
            #| formula        | correct     | never         |
            | script          | correct     | never         |
            | radio_text      | correct     | never         |
            | checkbox_text   | correct     | never         |
            | image           | correct     | never         |

    Scenario: I can answer a problem with one attempt correctly and not reset
        Given I am viewing a "multiple choice" problem with "1" attempt
        When I answer a "multiple choice" problem "correctly"
        Then The "Reset" button does not appear

    Scenario: I can answer a problem with multiple attempts correctly and still reset the problem
        Given I am viewing a "multiple choice" problem with "3" attempts
        Then I should see "You have used 0 of 3 attempts" somewhere in the page
        When I answer a "multiple choice" problem "correctly"
        Then The "Reset" button does appear

    Scenario: I can answer a problem with multiple attempts correctly but cannot reset because randomization is off
        Given I am viewing a randomization "never" "multiple choice" problem with "3" attempts with reset
        Then I should see "You have used 0 of 3 attempts" somewhere in the page
        When I answer a "multiple choice" problem "correctly"
        Then The "Reset" button does not appear

    Scenario: I can view how many attempts I have left on a problem
        Given I am viewing a "multiple choice" problem with "3" attempts
        Then I should see "You have used 0 of 3 attempts" somewhere in the page
        When I answer a "multiple choice" problem "incorrectly"
        And I reset the problem
        Then I should see "You have used 1 of 3 attempts" somewhere in the page
        When I answer a "multiple choice" problem "incorrectly"
        And I reset the problem
        Then I should see "You have used 2 of 3 attempts" somewhere in the page
        And The "Submit" button does appear
        When I answer a "multiple choice" problem "correctly"
        Then The "Reset" button does not appear

    Scenario: I can view the answer if the problem has it
        Given I am viewing a "numerical" that shows the answer "always"
        When I press the button with the label "Show Answer"
        Then I should see "4.14159" somewhere in the page

    Scenario: I can see my score on a problem when I answer it and after I reset it
        Given I am viewing a "<ProblemType>" problem
        When I answer a "<ProblemType>" problem "<Correctness>ly"
        Then I should see a score of "<Score>"
        When I reset the problem
        Then I should see a score of "<Points Possible>"

        Examples:
            | ProblemType     | Correctness | Score                 | Points Possible       |
            | drop down       | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  |
            | drop down       | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  |
            | multiple choice | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  |
            | multiple choice | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  |
            | checkbox        | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  |
            | checkbox        | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  |
            | radio           | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  |
            | radio           | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  |
            | numerical       | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  |
            | numerical       | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  |
            #| formula        | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  |
            #| formula        | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  |
            | script          | correct     | 2/2 points (ungraded) | 0/2 points (ungraded) |
            | script          | incorrect   | 0/2 points (ungraded) | 0/2 points (ungraded) |
            | image           | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  |
            | image           | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  |

    Scenario: I can see my score on a non-randomized problem when I answer it and after I reset it
        Given I am viewing a "<ProblemType>" problem with randomization "<Randomization>" with reset button on
        When I answer a "<ProblemType>" problem "<Correctness>ly"
        Then I should see a score of "<Score>"
        When I reset the problem
        Then I should see a score of "<Points Possible>"

        Examples:
            | ProblemType     | Correctness | Score                 | Points Possible       | Randomization |
            | drop down       | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | drop down       | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | multiple choice | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | multiple choice | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | checkbox        | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | checkbox        | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | radio           | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | radio           | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | numerical       | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | numerical       | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            #| formula        | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            #| formula        | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | script          | correct     | 2/2 points (ungraded) | 0/2 points (ungraded) | never         |
            | script          | incorrect   | 0/2 points (ungraded) | 0/2 points (ungraded) | never         |
            | image           | correct     | 1/1 point (ungraded)  | 0/1 point (ungraded)  | never         |
            | image           | incorrect   | 0/1 point (ungraded)  | 0/1 point (ungraded)  | never         |

    Scenario: I can see my score on a problem to which I submit a blank answer
        Given I am viewing a "<ProblemType>" problem
        When I submit a problem
        Then I should see a score of "<Points Possible>"

        Examples:
            | ProblemType | Points Possible      |
            | image       | 0/1 point (ungraded) |

    Scenario: I can reset the correctness of a problem after changing my answer
        Given I am viewing a "<ProblemType>" problem
        Then my "<ProblemType>" answer is marked "unanswered"
        When I answer a "<ProblemType>" problem "<InitialCorrectness>ly"
        And I input an answer on a "<ProblemType>" problem "<OtherCorrectness>ly"
        Then my "<ProblemType>" answer is marked "unanswered"
        And I reset the problem

        Examples:
            | ProblemType | InitialCorrectness | OtherCorrectness |
            | drop down   | correct            | incorrect        |
            | drop down   | incorrect          | correct          |
            | checkbox    | correct            | incorrect        |
            | checkbox    | incorrect          | correct          |
            #| string     | correct            | incorrect        |
            #| string     | incorrect          | correct          |
            | numerical   | correct            | incorrect        |
            | numerical   | incorrect          | correct          |
            #| formula    | correct            | incorrect        |
            #| formula    | incorrect          | correct          |
            | script      | correct            | incorrect        |
            | script      | incorrect          | correct          |

    # Radio groups behave slightly differently from other input types, because they
    # don't put their status to the top left of the boxes (as checkboxes do); thus, they
    # never have a status of "unanswered" once an answer has been chosen. They should simply
    # NOT be marked either correct or incorrect. Arguably this behavior should be changed;
    # when it is, these cases should move into the Scenario above.
    Scenario: I can reset the correctness of a radiogroup problem after changing my answer
        Given I am viewing a "<ProblemType>" problem
        When I answer a "<ProblemType>" problem "<InitialCorrectness>ly"
        Then my "<ProblemType>" answer is marked "<InitialCorrectness>"
        And I reset the problem
        Then my "<ProblemType>" answer is NOT marked "<InitialCorrectness>"
        And my "<ProblemType>" answer is NOT marked "<OtherCorrectness>"

        Examples:
            | ProblemType     | InitialCorrectness | OtherCorrectness |
            | multiple choice | correct            | incorrect        |
            | multiple choice | incorrect          | correct          |
            | radio           | correct            | incorrect        |
            | radio           | incorrect          | correct          |
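Each scenario above that carries an Examples table is expanded row by row: every `<ProblemType>`-style placeholder in the steps is replaced with that row's value before the step runs. A minimal, hypothetical sketch of that substitution (illustrative only, not lettuce's actual parser):

```python
import re


def expand_examples_row(steps, headers, row):
    """Splice one Examples-table row into a scenario's <placeholder> tokens."""
    values = dict(zip(headers, row))
    return [
        re.sub(r'<([^>]+)>', lambda m: values[m.group(1)], step_line)
        for step_line in steps
    ]


steps = ['And I answer a "<ProblemType>" problem "<Correctness>ly"',
         'Then my "<ProblemType>" answer is marked "unanswered"']
expanded = expand_examples_row(steps,
                               ['ProblemType', 'Correctness'],
                               ['drop down', 'correct'])
print(expanded[0])  # And I answer a "drop down" problem "correctly"
```

Note that `<Correctness>ly` expands to `"correctly"`, which is exactly the suffix the step regexes in the steps file below are written to capture.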


@@ -1,180 +0,0 @@
'''
Steps for problem.feature lettuce tests
'''

# pylint: disable=missing-docstring
# pylint: disable=redefined-outer-name

from lettuce import step, world

from common import i_am_registered_for_the_course, visit_scenario_item
from problems_setup import PROBLEM_DICT, add_problem_to_course, answer_problem, problem_has_answer


def _view_problem(step, problem_type, problem_settings=None):
    i_am_registered_for_the_course(step, 'model_course')

    # Ensure that the course has this problem type
    add_problem_to_course(world.scenario_dict['COURSE'].number, problem_type, problem_settings)

    # Go to the one section in the factory-created course
    # which should be loaded with the correct problem
    visit_scenario_item('SECTION')


@step(u'I am viewing a "([^"]*)" problem with "([^"]*)" attempt')
def view_problem_with_attempts(step, problem_type, attempts):
    _view_problem(step, problem_type, {'max_attempts': attempts})


@step(u'I am viewing a randomization "([^"]*)" "([^"]*)" problem with "([^"]*)" attempts with reset')
def view_problem_attempts_reset(step, randomization, problem_type, attempts):
    _view_problem(step, problem_type, {'max_attempts': attempts,
                                       'rerandomize': randomization,
                                       'show_reset_button': True})


@step(u'I am viewing a "([^"]*)" that shows the answer "([^"]*)"')
def view_problem_with_show_answer(step, problem_type, answer):
    _view_problem(step, problem_type, {'showanswer': answer})


@step(u'I am viewing a "([^"]*)" problem')
def view_problem(step, problem_type):
    _view_problem(step, problem_type)


@step(u'I am viewing a randomization "([^"]*)" "([^"]*)" problem with reset button on')
def view_random_reset_problem(step, randomization, problem_type):
    _view_problem(step, problem_type, {'rerandomize': randomization, 'show_reset_button': True})


@step(u'External graders respond "([^"]*)"')
def set_external_grader_response(step, correctness):
    assert correctness in ['correct', 'incorrect']

    response_dict = {
        'correct': correctness == 'correct',
        'score': 1 if correctness == 'correct' else 0,
        'msg': 'Your problem was graded {0}'.format(correctness)
    }

    # Set the fake xqueue server to always respond
    # correct/incorrect when asked to grade a problem
    world.xqueue.config['default'] = response_dict


@step(u'I answer a "([^"]*)" problem "([^"]*)ly"')
def answer_problem_step(step, problem_type, correctness):
    """ Mark a given problem type correct or incorrect, then submit it.

    *problem_type* is a string representing the type of problem (e.g. 'drop down')
    *correctness* is in ['correct', 'incorrect']
    """
    # Change the answer on the page
    input_problem_answer(step, problem_type, correctness)

    # Submit the problem
    submit_problem(step)


@step(u'I input an answer on a "([^"]*)" problem "([^"]*)ly"')
def input_problem_answer(_, problem_type, correctness):
    """
    Have the browser input an answer (either correct or incorrect)
    """
    assert correctness in ['correct', 'incorrect']
    assert problem_type in PROBLEM_DICT
    answer_problem(world.scenario_dict['COURSE'].number, problem_type, correctness)


@step(u'I submit a problem')
# pylint: disable=unused-argument
def submit_problem(step):
    # First scroll down so the loading MathJax button does not
    # cover up the Submit button
    world.browser.execute_script("window.scrollTo(0,1024)")
    world.css_click("button.submit")

    # Wait for the problem to finish re-rendering
    world.wait_for_ajax_complete()


@step(u'The "([^"]*)" problem displays a "([^"]*)" answer')
def assert_problem_has_answer(step, problem_type, answer_class):
    '''
    Assert that the problem is displaying a particular answer.
    These correspond to the same correct/incorrect
    answers we set in answer_problem().

    We can also check that a problem has been left blank
    by setting answer_class='blank'.
    '''
    assert answer_class in ['correct', 'incorrect', 'blank']
    assert problem_type in PROBLEM_DICT
    problem_has_answer(world.scenario_dict['COURSE'].number, problem_type, answer_class)


@step(u'I reset the problem')
def reset_problem(_step):
    world.css_click('button.reset')

    # Wait for the problem to finish re-rendering
    world.wait_for_ajax_complete()


@step(u'I press the button with the label "([^"]*)"$')
def press_the_button_with_label(_step, buttonname):
    button_css = 'button span.show-label'
    elem = world.css_find(button_css).first
    world.css_has_text(button_css, elem)
    world.css_click(button_css)


@step(u'The "([^"]*)" button does( not)? appear')
def action_button_present(_step, buttonname, doesnt_appear):
    button_css = 'div.action button[data-value*="%s"]' % buttonname
    if bool(doesnt_appear):
        assert world.is_css_not_present(button_css)
    else:
        assert world.is_css_present(button_css)


@step(u'I should see a score of "([^"]*)"$')
def see_score(_step, score):
    # The problem progress is changed by
    # cms/static/xmodule_js/src/capa/display.js,
    # so give it some time to render on the page.
    score_css = 'div.problem-progress'
    expected_text = '{}'.format(score)
    world.wait_for(lambda _: world.css_has_text(score_css, expected_text))


@step(u'[Mm]y "([^"]*)" answer is( NOT)? marked "([^"]*)"')
def assert_answer_mark(_step, problem_type, isnt_marked, correctness):
    """
    Assert that the expected answer mark is visible
    for a given problem type.

    *problem_type* is a string identifying the type of problem (e.g. 'drop down')
    *correctness* is in ['correct', 'incorrect', 'unanswered']
    """
    # Determine which selector(s) to look for based on correctness
    assert correctness in ['correct', 'incorrect', 'unanswered']
    assert problem_type in PROBLEM_DICT

    # At least one of the correct selectors should be present
    for sel in PROBLEM_DICT[problem_type][correctness]:
        if bool(isnt_marked):
            world.wait_for(lambda _: world.is_css_not_present(sel))  # pylint: disable=cell-var-from-loop
            has_expected = world.is_css_not_present(sel)
        else:
            world.css_find(sel)  # css_find includes a wait_for pattern
            has_expected = world.is_css_present(sel)

        # As soon as we find the selector, break out of the loop
        if has_expected:
            break

    # Expect that we found the expected selector
    assert has_expected
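Each `@step` decorator above registers a regular expression; when lettuce executes a scenario line, the quoted capture groups become the step function's positional arguments. A standalone sketch of that matching, using only the standard `re` module (illustrative, not lettuce internals):

```python
import re

# The same pattern registered by answer_problem_step above
STEP_PATTERN = re.compile(u'I answer a "([^"]*)" problem "([^"]*)ly"')

# The leading Gherkin keyword (Given/When/Then/And) is not part of the match
line = u'When I answer a "drop down" problem "correctly"'
match = STEP_PATTERN.search(line)

# The capture groups become the step function's arguments
problem_type, correctness = match.groups()
print(problem_type)  # drop down
print(correctness)   # correct
```

Note how the `"([^"]*)ly"` suffix strips the adverb ending, so the step receives `'correct'` or `'incorrect'` directly, matching the assertions inside `input_problem_answer`.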