First set of fixes from the pull request

This does not include some of the testing files: the textannotation and videoannotation test files are not ready; waiting for an answer on the issue.

Commits:
- Deleted token line in api.py and added a test for the token generator
- Added notes_spec.coffee; removed spec file
- Fixed minor error with the test
- Fixed some quality errors
- Fixed unit tests
- Added advanced module

Quality and Testing Coverage:
1. In test_textannotation.py I already check for line 75, as stated in the diff at line 43; same with test_videoannotation.
2. As you said, exceptions cannot be checked for firebase_token_generator.py. The version of Python active on the edX server is 2.7 or higher, but the code is there for correctness; error checking works the same way.
3. I added a test for student/views.py within tests and deleted the unused secret assignment.
4. test_token_generator.py is now its own file.

More commits:
- Added secret token data input
- Fixed token generator

Annotation Tools in Place

The purpose of this pull request is to install two major modules: (1) a module to annotate text and (2) a module to annotate video. In either case an instructor can declare them in Advanced Settings under advanced_modules and input content (HTML for text; mp4 or YouTube videos for video). Students will be able to highlight portions, add comments, and reply to each other. A storage server needs to be set up per course, along with a secret token for talking with said storage.

Changes:
1. Added a test to check for the creation of a token in tests.py (along with the rest of the tests for student/views.py).
2. Removed items in cms pertaining to annotation, as this will only be possible in the lms.
3. Added more comments to firebase_token_generator.py, the test files, and student/views.py.
4. Added some internationalization to textannotation.html and videoannotation.html.
I need some help with doing it in JavaScript, but the HTML is covered.

Commits:
- Incorporated lib for translate
- Fixed quality errors
- Fixed my notes with catch token

Text and Video Annotation Modules - First Iteration

The following code change is the first iteration of the modules for text and video annotation.

Installing Modules:
1. Under "Advanced Settings", add "textannotation" and "videoannotation" to the list of advanced_modules.
2. Add a link to an external storage for annotations under "annotation_storage_url".
3. Add the secret token for talking with said storage under "annotation_token_secret".

Using Modules:
1. When creating a new unit, you can find the Text and Video Annotation modules under the "Advanced" component.
2. Make sure you have either Text or Video in one unit, but not both.
3. Annotations are only allowed on the Live/Public version, not in Studio.

More commits:
- Added missing templates and fixed more of the quality errors
- Fixed annotator-not-existing issue in cms and tried to move get_html() from the annotation module class to the descriptor
- Added a space after # in comments
- Fixed issue with an empty module and token links
- Added licenses and fixed vis naming scheme and location
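The three install steps above amount to setting the following keys in Studio's Advanced Settings. This is only a sketch of what the entries might look like; the URL and secret shown are placeholders, not real values:

```
advanced_modules:         ["textannotation", "videoannotation"]
annotation_storage_url:   "https://your-annotation-store.example.com"
annotation_token_secret:  "<your-secret-token>"
```

The secret must match the one configured on the per-course storage server, since it is used to sign the tokens the LMS sends along with annotation requests.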
100 lines
3.4 KiB
Python
'''
Firebase - library to generate a token

License: https://github.com/firebase/firebase-token-generator-python/blob/master/LICENSE

Tweaked and Edited by @danielcebrianr and @lduarte1991

This library will take either objects or strings and use python's built-in encoding
system as specified by RFC 3548. Thanks to the firebase team for their open-source
library. This was made specifically for speaking with the annotation_storage_url and
can be used and expanded, but not modified by anyone else needing such a process.
'''
from base64 import urlsafe_b64encode
import hashlib
import hmac
import sys
try:
    import json
except ImportError:
    import simplejson as json

__all__ = ['create_token']

TOKEN_SEP = '.'

def create_token(secret, data):
    '''
    Simply takes in the secret key and the data and
    passes it to the local function _encode_token.
    '''
    return _encode_token(secret, data)

if sys.version_info < (2, 7):
    def _encode(bytes_data):
        '''
        Takes a JSON object, string, or binary and
        uses python's urlsafe_b64encode to encode the data
        and make it safe to pass along in a url.
        To make sure it does not conflict with variables,
        we remove the equals-sign padding.
        More info: docs.python.org/2/library/base64.html
        '''
        encoded = urlsafe_b64encode(bytes(bytes_data))
        return encoded.decode('utf-8').replace('=', '')
else:
    def _encode(bytes_info):
        '''
        Same as the function above, but for Python 2.7 or later.
        '''
        encoded = urlsafe_b64encode(bytes_info)
        return encoded.decode('utf-8').replace('=', '')

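The padding removal that `_encode` performs can be seen with a small standalone example (the input string here is arbitrary, chosen only to produce padding):

```python
from base64 import urlsafe_b64encode

# Standard base64 pads its output to a multiple of 4 characters with '='
encoded = urlsafe_b64encode(b'hi').decode('utf-8')

# '=' has a reserved meaning in URL query strings, so the module strips it;
# a decoder can restore the padding from the string length
stripped = encoded.replace('=', '')

assert encoded == 'aGk='
assert stripped == 'aGk'
```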
def _encode_json(obj):
    '''
    Before a python dict object can be properly encoded,
    it must be transformed into a JSON string and then
    into bytes to be encoded using the function defined above.
    '''
    return _encode(bytearray(json.dumps(obj), 'utf-8'))

def _sign(secret, to_sign):
    '''
    This function creates a signature that goes at the end of the
    message and is specific to the secret, not to the actual
    content of the encoded body.
    More info on hashing: http://docs.python.org/2/library/hmac.html
    The function creates a hashed value of the secret and to_sign
    and returns the digest based on the secure hash
    algorithm SHA-256.
    '''
    def portable_bytes(string):
        '''
        Simply transforms a string into a bytes object,
        which is a sequence of immutable integers 0 <= x < 256.
        Always try to encode as utf-8, unless it is not
        compliant.
        '''
        try:
            return bytes(string, 'utf-8')
        except TypeError:
            return bytes(string)
    return _encode(hmac.new(portable_bytes(secret), portable_bytes(to_sign), hashlib.sha256).digest())  # pylint: disable=E1101

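The HMAC-SHA256 step inside `_sign` can be demonstrated in isolation; the key and message below are arbitrary placeholders:

```python
import hashlib
import hmac

# HMAC mixes a secret key into the hash, so only a holder of the key
# can produce (or verify) the signature for a given message
mac = hmac.new(b'my-secret', b'header.claims', hashlib.sha256)
digest = mac.digest()

# SHA-256 always yields a 32-byte digest, regardless of input length
assert len(digest) == 32

# The computation is deterministic: same key and message, same digest
assert digest == hmac.new(b'my-secret', b'header.claims', hashlib.sha256).digest()
```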
def _encode_token(secret, claims):
    '''
    This is the main function that takes the secret token and
    the data to be transmitted. A header is created for decoding
    purposes. TOKEN_SEP means that a period/full stop separates the
    header, the data object/message, and the signature.
    '''
    encoded_header = _encode_json({'typ': 'JWT', 'alg': 'HS256'})
    encoded_claims = _encode_json(claims)
    secure_bits = '%s%s%s' % (encoded_header, TOKEN_SEP, encoded_claims)
    sig = _sign(secret, secure_bits)
    return '%s%s%s' % (secure_bits, TOKEN_SEP, sig)
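A self-contained sketch of how the finished token is assembled end to end. The secret and claims below are placeholders, and `make_token` mirrors the module's `_encode_token` logic rather than importing it, so the example runs on its own:

```python
import hashlib
import hmac
import json
from base64 import urlsafe_b64encode


def _b64(data):
    # URL-safe base64 without '=' padding, mirroring the module's _encode
    return urlsafe_b64encode(data).decode('utf-8').replace('=', '')


def make_token(secret, claims):
    # header.claims.signature -- the JWT layout produced by _encode_token
    header = _b64(json.dumps({'typ': 'JWT', 'alg': 'HS256'}).encode('utf-8'))
    body = _b64(json.dumps(claims).encode('utf-8'))
    signing_input = '%s.%s' % (header, body)
    sig = _b64(hmac.new(secret.encode('utf-8'),
                        signing_input.encode('utf-8'),
                        hashlib.sha256).digest())
    return '%s.%s' % (signing_input, sig)


token = make_token('my-course-secret', {'email': 'student@example.com'})

# A JWT has exactly three non-empty dot-separated segments
parts = token.split('.')
assert len(parts) == 3
assert all(parts)
```

The storage server, holding the same secret, recomputes the signature over the first two segments and compares it to the third; a mismatch means the claims were tampered with or the secret is wrong.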