Wikisoba project/JSON specification for version 2

Latest revision as of 12:11, 16 February 2017

Historical
This page is kept as an archival reference.
If you want to raise a point about it, please start a discussion thread on the community forum.

This is the working page for designing the JSON specification for Wikisoba Mark II.

Working assumptions

The JSON for any one "slide" (adopting the term in Mark I) reads as

{"configuration": configuration object, "meta": metadata object, "data": data array}

The configuration object includes the "slide type", which would be 1 for questions; there are about five types in all, the other types being ways of including supporting material. The object, as explained below, will have about a dozen fields.

The metadata should be adequate, at the very least, to the needs of Creative Commons licensing. Authorship and licensing of the question should be covered, and import, adaptation and authoring information maintained. A Wikidata item here to indicate general topic area would be a good idea. (Should look into general metadata use.) The legacy of Moodle export shows up in the JSON-GIFT sample, and is to be reconciled with what we would want.

Overall structure of a quiz

This should be as:

{"intro": , "sections":section array, "outro": }

with the section array as:

[section(1), section(2), ..., section(K)]

and a section as

{"nav1": , "slides":slide array, "nav2": }

Here a slide array is like

[slide(1), slide(2), ..., slide(L)]

and the slides are expected to be of about five types, type #1 being the question type.
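The nesting above can be sketched in Python; the field values here (license string, empty data array) are placeholders, not part of the specification.

```python
import json

# Hypothetical minimal quiz following the nesting above: a quiz holds an
# intro, a section array and an outro; each section holds nav fields and
# a slide array; each slide carries configuration, meta and data.
slide = {
    "configuration": {"slide type": 1},   # type 1 = question
    "meta": {"license": "CC BY-SA 4.0"},  # placeholder metadata
    "data": [],                           # S-array and T-array go here
}
section = {"nav1": None, "slides": [slide], "nav2": None}
quiz = {"intro": None, "sections": [section], "outro": None}

# The whole structure must round-trip as valid JSON.
encoded = json.dumps(quiz)
decoded = json.loads(encoded)
```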

Choice-cloze type

This is a way of looking at some familiar types of question. There is a numerical parameter N. This is to be a versatile question type, including a number of typical multiple choice and missing text (cloze) formats.

In describing rendering, [(...)] means a rectangle, ((...)) means a rectangle with rounded corners.

Simple examples

First, the case N = 1.

A traditional choice question can be set up with

S(1) = "What is the capital of the USA?"
T(1) = {"true":"Washington DC", "dummy":{"false":"New York", "feedback":"New York is a large city in the USA, but not the capital."}}
S(2) = " "

This is to render as

What is the capital of the USA? [( )]
((New York)) ((Washington DC))

with the second line randomly ordered. If the incorrect answer "New York" is submitted, the message "New York is a large city in the USA, but not the capital." should appear in the feedback area.

The missing word option would read as

S(1) = "The capital of the USA is"
T(1) = {"true":"Washington DC", "dummy":{"false":"New York", "feedback":"New York is a large city in the USA, but not the capital."}}
S(2) = "."

This is to render as

The capital of the USA is [( )].
((New York)) ((Washington DC))

Another type of example is pure matching of missing phrases. Write the piece of text to reconstruct as

S(1) T(1) S(2) T(2) ... S(N) T(N) S(N + 1)

Then the software should display boxes [( )] between the S(i):

S(1) [(   )] S(2) [(   )] S(3) ... S(N) [(   )] S(N + 1)

and display the boxes ((T(j))) below in a random order.

The user is supposed to drag the phrases into the boxes.

Here the boxes and matching strings can have corresponding colours randomly turned on, as a form of custom hint.
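The matching layout can be sketched as a small renderer; the function name and seeding are illustrative, not part of the specification.

```python
import random

def render_cloze(S, T, seed=None):
    """Interleave the S(i) with input boxes [(   )] and return the
    candidate phrases ((T(j))) in random order, as described above.
    S has N + 1 strings and T has N phrases."""
    rng = random.Random(seed)
    line = S[0]
    for s in S[1:]:
        line += " [(   )] " + s
    phrases = ["((%s))" % t for t in T]
    rng.shuffle(phrases)
    return line, phrases

line, phrases = render_cloze(
    ["The capital of", "is Washington DC."], ["the USA"], seed=0)
```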

General setting

The data array will look like

[S-array, T-array]

Here the S-array will be

[S(1), S(2), ..., S(N + 1)]

and the S(i) will be strings, possibly null, subject to JSON constraints[1] and the constraint that S(N + 1) is either null or ends in a terminating punctuation mark.[2]

The T-array will be

[T(1), T(2), ..., T(N)]

In general T(j) will be an object

{"true": [acceptable answer1, acceptable answer2, ...], "dummy": [{"false": string1, "feedback": response1}, {"false": string2, "feedback": response2}, ...], "hint": custom hint, ...}

Here the "true" component holds the strings, each of which is an acceptable answer in the place between S(j) and S(j + 1). The object has to carry the other information in the question, namely any dummy answers and set responses,[3] custom hints, special reset information and special scoring information. Each dummy answer actually shown consists of {"false": string, "feedback": response}, but the response setting can be null as the default.
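Checking a submitted answer against a T(j) object can be sketched as follows. The "true" and "dummy" components are represented as Python lists (JSON object keys cannot repeat), and the Unicode normalisation step is an assumption; the specification only asks for non-case-sensitive equality.

```python
import unicodedata

def equal(a: str, b: str) -> bool:
    """Non-case-sensitive equality of Unicode strings, the default
    "equality test" in the configuration object. NFC normalisation
    before casefolding is an assumption made here."""
    def norm(s):
        return unicodedata.normalize("NFC", s).casefold()
    return norm(a) == norm(b)

def check(tj: dict, answer: str):
    """Score one submitted answer against a T(j) object: return
    (correct?, feedback-or-None)."""
    for accepted in tj["true"]:
        if equal(answer, accepted):
            return True, None
    for dummy in tj["dummy"]:
        if equal(answer, dummy["false"]):
            return False, dummy["feedback"]
    return False, None

tj = {"true": ["Washington DC", "Washington, D.C."],
      "dummy": [{"false": "New York",
                 "feedback": "New York is a large city in the USA, but not the capital."}]}
```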

Five subtypes

Going beyond simply matching phrases into boxes (type A question), there are two types of added complexity, namely multiple choice, and colour coding. Colours can identify which answers relate to which box (i.e. calibrate the missing phrases a bit).

Type        Plain   Coloured
Matching    A       n/a
Check box   B       C
Complex     D       E

The colouring option clearly should be operated by the question subtype field, but may also be a custom hint. The dummy answers should be listed in the annotations.

Note on multiple correct answers: they can be indicated by a thicker line round the input box, modified when input is made.

Configuration object

This is a provisional listing of some "fields" that should occur in the configuration object, mostly with default values.

Working content:

  1. (slide type) 1
  2. (question name)
  3. (question type) choice/cloze
  4. (question subtype) A to E[4]
  5. (equality test) non-case sensitive equality of Unicode strings[5]
  6. (display type) graphical[6]
  7. (response type) enabled[7]
  8. (hint type) i.e. go back | wikidata or other link in a new tab | custom
  9. (reset type) back to question start
  10. (scoring type) "unit scoring"[8]
  11. (timing type) none
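The "unit scoring" default, and the variable-credit generalisation mentioned in the notes, could be sketched as follows; the exact penalty semantics are an assumption.

```python
def unit_score(results):
    """Default "unit scoring": one point for each correct answer
    submitted, zero for each incorrect one. results is a list of
    booleans, one per submitted answer."""
    return sum(1 for ok in results if ok)

def weighted_score(results, weights, penalty=0.0):
    """Generalisation sketched in the notes: per-answer credit that can
    add up to 1 by dividing up 100%, with an optional set penalty for
    each incorrect answer (semantics assumed here)."""
    score = 0.0
    for ok, weight in zip(results, weights):
        score += weight if ok else -penalty
    return score
```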

Also:

  1. Legacy fields: allow for some carried forward from Moodle
  2. Metadata to track provenance and attribution of the question, with a (wikidata item) to use as a tag for topic area.

In other words the working content is like:

{"slide type": , "question name": , "question type": , "question subtype": , "equality test": , "display type": , "response type": , "hint type": , "reset type": , "scoring type": , "timing type": }

The "wikidata item" should be extracted from the metadata, if it is needed to generate a hint.

Legacy fields to include "usecase", "hidden", "old type", "description". Provenance fields to include "quizencodingversion", "old name", "description", plus license and attribution information.
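The working content above, with its stated defaults, might be filled in like this; the default subtype "A" and the override values are illustrative assumptions.

```python
# Defaults taken from the provisional listing above; fields without a
# stated default are None. The subtype default "A" is an assumption.
DEFAULTS = {
    "slide type": 1,
    "question name": None,
    "question type": "choice/cloze",
    "question subtype": "A",
    "equality test": "non-case sensitive",
    "display type": "graphical",
    "response type": "enabled",
    "hint type": "go back",
    "reset type": "back to question start",
    "scoring type": "unit scoring",
    "timing type": "none",
}

def configuration(overrides=None):
    """Build a configuration object, letting an author override defaults."""
    config = dict(DEFAULTS)
    config.update(overrides or {})
    return config

cfg = configuration({"question name": "US capitals", "question subtype": "B"})
```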

Notes

  1. I.e. can be Unicode, but with certain characters escaped.
  2. I.e. ends in . or ? or !.
  3. I.e. false answers, in multiple choice terms.
  4. See previous section
  5. Caveat about JSON's use of escapes
  6. As opposed to buttons
  7. I.e. display responses in the feedback area before moving on to next question. The default would be like this. Standard messages for: question completed correctly; question partially completed, as far as it goes; some incorrect answers where wrongly matched. If some dummy answers are chosen, use the response settings for those if some relevant ones are different from "null".
  8. I.e. one point for each correct answer submitted, zero for each incorrect answer. The general case allows variable credit, multiple correct answers adding up to 1 (for example) by dividing up 100%, and penalties for incorrect answers