- Description:
SAVEE (Surrey Audio-Visual Expressed Emotion) is an emotion recognition dataset. It consists of recordings from 4 male actors in 7 different emotions, 480 British English utterances in total. The sentences were chosen from the standard TIMIT corpus and phonetically-balanced for each emotion. This release contains only the audio stream from the original audio-visual recording.
The data is split so that the training set consists of recordings from 2 speakers, while the validation and test sets each consist of samples from 1 of the remaining speakers.
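As a quick sanity check of this speaker-disjoint split, the sketch below (assuming the dataset has already been prepared via the manual download described further down) counts the speaker IDs present in each split:

```python
import collections

import tensorflow_datasets as tfds

# Load all three splits; requires AudioData.zip to be present in manual_dir
# (see the manual download instructions below).
ds_splits = tfds.load('savee')

for split_name, split_ds in ds_splits.items():
    speakers = collections.Counter(
        ex['speaker_id'].numpy().decode('utf-8') for ex in split_ds)
    # Expect 2 distinct speakers in 'train' and 1 each in 'validation'/'test'.
    print(split_name, dict(speakers))
```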
Homepage: http://kahlan.eps.surrey.ac.uk/savee/
Source code: tfds.datasets.savee.Builder
Versions: 1.0.0 (default): No release notes.
Download size: Unknown size
Dataset size: 259.15 MiB
Manual download instructions: This dataset requires you to download the source data manually into download_config.manual_dir (defaults to ~/tensorflow_datasets/downloads/manual/): manual_dir should contain the file AudioData.zip. This file should be under Data/Zip/AudioData.zip in the dataset folder provided upon registration. You need to register at http://personal.ee.surrey.ac.uk/Personal/P.Jackson/SAVEE/Register.html in order to get the link to download the dataset.
Auto-cached (documentation): No
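A minimal sketch of this manual-download workflow, assuming AudioData.zip has already been obtained after registering (the source path below is hypothetical, and the target directory is the default manual_dir):

```python
import shutil
from pathlib import Path

import tensorflow_datasets as tfds

# Default manual_dir; adjust if download_config.manual_dir is overridden.
manual_dir = Path('~/tensorflow_datasets/downloads/manual').expanduser()
manual_dir.mkdir(parents=True, exist_ok=True)

# Hypothetical location of the registered download on the local machine.
shutil.copy('Data/Zip/AudioData.zip', manual_dir / 'AudioData.zip')

builder = tfds.builder('savee')
builder.download_and_prepare()  # picks up AudioData.zip from manual_dir
train_ds = builder.as_dataset(split='train')
```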
Splits:
Split | Examples
---|---
'test' | 120
'train' | 240
'validation' | 120
- Feature structure:
FeaturesDict({
'audio': Audio(shape=(None,), dtype=int64),
'label': ClassLabel(shape=(), dtype=int64, num_classes=7),
'speaker_id': string,
})
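A minimal sketch of reading these features from a prepared copy of the dataset; the variable names are illustrative:

```python
import tensorflow_datasets as tfds

ds, info = tfds.load('savee', split='train', with_info=True)

for example in ds.take(1):
    audio = example['audio']            # int64 waveform, shape (num_samples,)
    label = example['label']            # int64 class index in [0, 7)
    speaker_id = example['speaker_id']  # scalar string identifying the actor

    # ClassLabel maps the integer label back to the emotion name.
    print(info.features['label'].int2str(int(label)),
          speaker_id.numpy().decode('utf-8'))
```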
- Feature documentation:
Feature | Class | Shape | Dtype | Description
---|---|---|---|---
 | FeaturesDict | | |
audio | Audio | (None,) | int64 |
label | ClassLabel | | int64 |
speaker_id | Tensor | | string |
Supervised keys (see as_supervised doc): ('audio', 'label')
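With as_supervised=True each element becomes an (audio, label) tuple, which fits directly into a tf.data pipeline. A minimal sketch (the batch size and float cast are arbitrary choices, and padded batching is used because the utterances have variable length):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

train_ds = tfds.load('savee', split='train', as_supervised=True)

train_ds = (
    train_ds
    .map(lambda audio, label: (tf.cast(audio, tf.float32), label),
         num_parallel_calls=tf.data.AUTOTUNE)
    .padded_batch(8, padded_shapes=([None], []))  # pad variable-length audio
    .prefetch(tf.data.AUTOTUNE)
)

for audio_batch, label_batch in train_ds.take(1):
    print(audio_batch.shape, label_batch.shape)  # e.g. (8, max_len), (8,)
```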
Figure (tfds.show_examples): Not supported.
- Citation:
@inproceedings{Vlasenko_combiningframe,
author = {Vlasenko, Bogdan and Schuller, Bjorn and Wendemuth, Andreas and Rigoll, Gerhard},
year = {2007},
month = {01},
pages = {2249-2252},
title = {Combining frame and turn-level information for robust recognition of emotions within speech},
journal = {Proceedings of Interspeech}
}