Reproducible Subjective Evaluation (ReSEval)
- reseval.create(config: Union[str, bytes, os.PathLike], directory: Union[str, bytes, os.PathLike], local: bool = False, production: bool = False, detach: bool = False) → str
Set up a subjective evaluation
- Parameters
config – The configuration file
directory – The directory containing the files to evaluate
local – Run subjective evaluation locally
production – Deploy the subjective evaluation to crowdsource participants
detach – If running locally, detaches the server process
- Returns
The name of the evaluation, as given in the configuration file
- Return type
str
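A minimal usage sketch of reseval.create, assuming a hypothetical configuration file config.yaml and stimulus directory stimuli/ (both placeholders, not part of the API):

```python
import reseval

# Set up a local evaluation from a placeholder config file and
# a placeholder directory of files to evaluate
name = reseval.create(
    'config.yaml',   # placeholder configuration file
    'stimuli/',      # placeholder directory containing the files to evaluate
    local=True,      # run the subjective evaluation locally
    detach=True)     # detach the local server process instead of blocking

# The returned name is the evaluation name given in the configuration file
print(name)
```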
- reseval.destroy(name: str, force: bool = False)
Destroy an evaluation's storage, database, server, and crowdsource task
- Parameters
name – The name of the evaluation to destroy
force – Destroy the evaluation's resources even if the evaluation is still running. Pays all participants who have taken the evaluation.
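A short sketch of tearing down an evaluation; the evaluation name 'my-evaluation' is a placeholder:

```python
import reseval

# Tear down the storage, database, server, and crowdsource task
# for a finished evaluation (name is a placeholder)
reseval.destroy('my-evaluation')

# Force teardown of a still-running evaluation; this pays all
# participants who have already taken the evaluation
reseval.destroy('my-evaluation', force=True)
```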
- reseval.extend(name: str, participants: int, local: bool = False, production: bool = False)
Extend a subjective evaluation
- Parameters
name – The name of the evaluation to extend
participants – The number of new participants to add to the evaluation
local – Run subjective evaluation locally
production – Deploy the subjective evaluation to crowdsource participants
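A sketch of adding participants to an existing evaluation; the name and participant count are placeholders:

```python
import reseval

# Add 20 more crowdsourced participants to a deployed evaluation
# ('my-evaluation' is a placeholder name)
reseval.extend('my-evaluation', 20, production=True)
```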
- reseval.pay(name: str)
Pay participants of a subjective evaluation
- Parameters
name – The name of the evaluation to pay for
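A sketch of paying participants, again with a placeholder evaluation name:

```python
import reseval

# Pay the participants of the evaluation named 'my-evaluation' (placeholder)
reseval.pay('my-evaluation')
```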
- reseval.results(name: str, directory: Union[str, bytes, os.PathLike]) → dict
Get the results of a subjective evaluation
- Parameters
name – The name of the subjective evaluation to retrieve results for
directory – The directory to save results
- Returns
Evaluation results
- Return type
dict
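A sketch of retrieving results; the evaluation name and output directory are placeholders:

```python
import reseval

# Save results for 'my-evaluation' (placeholder) into results/ (placeholder)
# and get them back as a dict
results = reseval.results('my-evaluation', 'results/')
print(results)
```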
- reseval.run(config: str, directory: Union[str, bytes, os.PathLike], results_directory: Union[str, bytes, os.PathLike] = PosixPath('.'), local: bool = False, production: bool = False, interval: int = 120) → dict
Perform subjective evaluation
- Parameters
config – The configuration file
directory – The directory containing the files to evaluate
results_directory – The directory to save results
local – Run subjective evaluation locally
production – Deploy the subjective evaluation to crowdsource participants
interval – The time between monitoring updates in seconds
- Returns
Evaluation results
- Return type
dict
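A sketch of performing a full evaluation with reseval.run, polling for monitoring updates every 60 seconds; the config file and directories are placeholders:

```python
import reseval

# Deploy a crowdsourced evaluation and block until it finishes,
# checking progress every 60 seconds; returns the evaluation results
results = reseval.run(
    'config.yaml',                 # placeholder configuration file
    'stimuli/',                    # placeholder directory of files to evaluate
    results_directory='results/',  # placeholder directory to save results
    production=True,
    interval=60)
print(results)
```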