lightrft.datasets.omnirewardbench

class lightrft.datasets.omnirewardbench.OmniRewardBenchT2AHandler[source]

Bases: OmniRewardBenchT2IHandler

Data handler for the OmniRewardBench text-to-audio human-preference benchmark. Prepares data for scalar reward model training on the pairwise-ranking task.

Paper: https://huggingface.co/papers/2510.23451
Dataset Repo: https://huggingface.co/datasets/HongbangYuan/OmniRewardBench

get_media_info(item: Dict[str, Any]) Dict[str, Dict[str, str]][source]

Extract media info (paths) for the two audios.

Parameters:

item (Dict[str, Any]) – A data item from load_data

Returns:

Dict containing local paths for ‘audio1’ and ‘audio2’

Return type:

Dict[str, Dict[str, str]]

Example:

info = handler.get_media_info(item)
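The nested return shape follows the `Dict[str, Dict[str, str]]` annotation. As a minimal sketch, the stand-in below shows what such a return value could look like; the inner `path` key and the `audio1_path`/`audio2_path` item fields are illustrative assumptions, not confirmed by the source.

```python
# Illustrative stand-in for the documented return shape of get_media_info
# on a T2A item: local paths keyed by 'audio1' and 'audio2'. The inner
# 'path' key is an assumption based on the Dict[str, Dict[str, str]] type.
def fake_get_media_info(item):
    return {
        "audio1": {"path": item["audio1_path"]},
        "audio2": {"path": item["audio2_path"]},
    }

item = {"audio1_path": "audio/a.wav", "audio2_path": "audio/b.wav"}
info = fake_get_media_info(item)
assert set(info) == {"audio1", "audio2"}
```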
parse_item(item: Dict[str, Any], media_content: Dict[str, Any], config: Dict[str, Any]) Tuple[List[Dict], List[Dict], Dict][source]

Parse a data item from OmniRewardBench-T2A into messages and metadata.

Parameters:
  • item (Dict[str, Any]) – The raw data item

  • media_content (Dict[str, Any]) – Loaded media content with ‘audio1’ and ‘audio2’ keys

  • config (Dict[str, Any]) – Configuration for task instructions

Returns:

A tuple of (messages0, messages1, metadata)

Return type:

Tuple[List[Dict], List[Dict], Dict]

Example:

msg0, msg1, other = handler.parse_item(item, media_content, config)
task_type = 'text-to-audio'
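For the pairwise-ranking setup above, `parse_item` yields one message list per candidate audio plus metadata. The sketch below illustrates that structure under stated assumptions: `build_messages`, the content field names, and the `preference` metadata key are hypothetical, chosen only to show how a scalar reward model would consume the two lists.

```python
# Hypothetical sketch of the pairwise output consumed by a scalar reward
# model: one message list per candidate audio, scored independently.
# Function and field names here are illustrative, not the handler's API.
def build_messages(prompt, audio_path):
    return [{"role": "user",
             "content": [{"type": "text", "text": prompt},
                         {"type": "audio", "audio": audio_path}]}]

prompt = "A dog barking in the rain"
messages0 = build_messages(prompt, "audio/a.wav")
messages1 = build_messages(prompt, "audio/b.wav")
metadata = {"preference": 0}  # index of the human-preferred candidate

# A scalar RM scores each list separately; a pairwise-ranking loss pushes
# score(messages0) above score(messages1) when preference == 0.
```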
class lightrft.datasets.omnirewardbench.OmniRewardBenchT2IGRMHandler[source]

Bases: OmniRewardBenchT2IHandler

Data handler for the OmniRewardBench text-to-image human-preference benchmark. Prepares data for generative reward model training on the pairwise-ranking task.

Paper: https://huggingface.co/papers/2510.23451
Dataset Repo: https://huggingface.co/datasets/HongbangYuan/OmniRewardBench

parse_item(item: Dict[str, Any], media_content: Dict[str, Any], config: Dict[str, Any]) Tuple[List[Dict], Dict][source]

Parse a data item from OmniRewardBench-T2I into a single message list and metadata, for generative reward model training on the pairwise-ranking task.

Parameters:
  • item (Dict[str, Any]) – The raw data item

  • media_content (Dict[str, Any]) – Loaded visual content

  • config (Dict[str, Any]) – Configuration for task instructions and max_pixels

Returns:

A tuple of (messages, metadata)

Return type:

Tuple[List[Dict], Dict]

Example:

messages, other = handler.parse_item(item, media_content, config)
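Unlike the scalar handlers, a generative reward model (GRM) sees both candidates in one prompt and is asked to judge between them. The sketch below illustrates one plausible shape for such a single message list; the instruction wording, content field names, and `preference` key are assumptions for illustration only.

```python
# Illustrative sketch of a single-message GRM format: both candidate
# images in one user turn, with the model asked to name the better one.
# All field names and the instruction text are assumptions.
def build_grm_messages(prompt, image1, image2):
    instruction = (f"Prompt: {prompt}\n"
                   "Which image better matches the prompt? Answer 1 or 2.")
    return [{"role": "user",
             "content": [{"type": "image", "image": image1},
                         {"type": "image", "image": image2},
                         {"type": "text", "text": instruction}]}]

messages = build_grm_messages("a red bicycle", "img/1.png", "img/2.png")
metadata = {"preference": 1}  # index of the human-preferred candidate
```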
class lightrft.datasets.omnirewardbench.OmniRewardBenchT2IHandler[source]

Bases: BaseDataHandler

Data handler for the OmniRewardBench text-to-image human-preference benchmark. Prepares data for scalar reward model training on the pairwise-ranking task.

Paper: https://huggingface.co/papers/2510.23451
Dataset Repo: https://huggingface.co/datasets/HongbangYuan/OmniRewardBench

get_media_info(item: Dict[str, Any]) Dict[str, Dict[str, str]][source]

Extract media info (paths) for the two images.

Parameters:

item (Dict[str, Any]) – A data item from load_data

Returns:

Dict containing local paths for ‘image1’ and ‘image2’

Return type:

Dict[str, Dict[str, str]]

Example:

info = handler.get_media_info(item)
load_data(path: str) List[Dict[str, Any]][source]

Loads data from a Parquet file.

Parameters:

path (str) – Path to the parquet file

Returns:

List of samples with ‘data_root’ attached

Return type:

List[Dict[str, Any]]

Example:

handler = OmniRewardBenchT2IHandler()
data = handler.load_data("path/to/OmniRewardBench/data.parquet")
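The documented behavior, attaching `data_root` to each sample, lets `get_media_info` later resolve media paths relative to the Parquet file's directory. The stand-in below sketches that step only; the real method reads Parquet, whereas here rows are passed in directly, and `load_rows_with_root` is a hypothetical name.

```python
import os

# Minimal stand-in for the documented 'data_root' behavior: attach the
# parquet file's directory to every sample so relative media paths can be
# resolved later. The real load_data reads the rows from Parquet; here
# they are supplied directly for illustration.
def load_rows_with_root(rows, parquet_path):
    data_root = os.path.dirname(os.path.abspath(parquet_path))
    return [{**row, "data_root": data_root} for row in rows]

rows = [{"prompt": "a cat", "image1_path": "img/a.png"}]
data = load_rows_with_root(rows, "/data/OmniRewardBench/data.parquet")
```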
parse_item(item: Dict[str, Any], media_content: Dict[str, Any], config: Dict[str, Any]) Tuple[List[Dict], List[Dict], Dict][source]

Parse a data item from OmniRewardBench-T2I into messages and metadata.

Parameters:
  • item (Dict[str, Any]) – The raw data item

  • media_content (Dict[str, Any]) – Loaded media content with ‘image1’ and ‘image2’ keys.

  • config (Dict[str, Any]) – Configuration for task instructions and max_pixels

Returns:

A tuple of (messages0, messages1, metadata)

Return type:

Tuple[List[Dict], List[Dict], Dict]

Example:

msg0, msg1, other = handler.parse_item(item, media_content, config)
task_type = 'text-to-image'
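The `max_pixels` entry in `config` mentioned above caps the pixel budget per image. A common way to honor such a cap, sketched below under that assumption, is to rescale both dimensions by the same factor; the actual resizing the handler performs is not specified in the source, and `fit_within_max_pixels` is a hypothetical helper.

```python
import math

# Hypothetical illustration of a max_pixels cap: rescale width and height
# by a common factor so width * height stays within the budget. This
# mirrors common vision-LM preprocessing; the handler's exact resizing
# logic is not documented here.
def fit_within_max_pixels(width, height, max_pixels):
    pixels = width * height
    if pixels <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / pixels)
    return int(width * scale), int(height * scale)

w, h = fit_within_max_pixels(4000, 3000, 1_000_000)
assert w * h <= 1_000_000
```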
class lightrft.datasets.omnirewardbench.OmniRewardBenchT2IPairHandler[source]

Bases: OmniRewardBenchT2IHandler

Data handler for the OmniRewardBench text-to-image human-preference benchmark. Prepares data for a generative reward model on the pairwise-ranking task.

Paper: https://huggingface.co/papers/2510.23451
Dataset Repo: https://huggingface.co/datasets/HongbangYuan/OmniRewardBench

parse_item(item: Dict[str, Any], media_content: Dict[str, Any], config: Dict[str, Any]) Tuple[List[Dict], Dict][source]

Parse a data item into generative messages and metadata.

Parameters:
  • item (Dict[str, Any]) – The raw data item

  • media_content (Dict[str, Any]) – Loaded visual content

  • config (Dict[str, Any]) – Configuration for task instructions

Returns:

A tuple of (messages, metadata)

Return type:

Tuple[List[Dict], Dict]

Example:

messages, other = handler.parse_item(item, media_content, config)
class lightrft.datasets.omnirewardbench.OmniRewardBenchT2VHandler[source]

Bases: OmniRewardBenchT2IHandler

Data handler for the OmniRewardBench text-to-video human-preference benchmark. Prepares data for scalar reward model training on the pairwise-ranking task.

Paper: https://huggingface.co/papers/2510.23451
Dataset Repo: https://huggingface.co/datasets/HongbangYuan/OmniRewardBench

get_media_info(item: Dict[str, Any]) Dict[str, Dict[str, str]][source]

Extract media info (paths) for the two videos.

Parameters:

item (Dict[str, Any]) – A data item from load_data

Returns:

Dict containing local paths for ‘video1’ and ‘video2’

Return type:

Dict[str, Dict[str, str]]

Example:

info = handler.get_media_info(item)
parse_item(item: Dict[str, Any], media_content: Dict[str, Any], config: Dict[str, Any]) Tuple[List[Dict], List[Dict], Dict][source]

Parse a data item from OmniRewardBench-T2V into messages and metadata.

Parameters:
  • item (Dict[str, Any]) – The raw data item

  • media_content (Dict[str, Any]) – Loaded visual content

  • config (Dict[str, Any]) – Configuration for task instructions, max_pixels, and fps

Returns:

A tuple of (messages0, messages1, metadata)

Return type:

Tuple[List[Dict], List[Dict], Dict]

Example:

msg0, msg1, other = handler.parse_item(item, media_content, config)
task_type = 'text-to-video'
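For video items, `config` additionally carries an `fps` entry controlling frame sampling. The sketch below shows the arithmetic such a setting typically implies, assuming frames are sampled at a fixed rate from the clip; `sampled_frame_count` is a hypothetical helper, not the handler's API.

```python
# Illustrative sketch of the fps config for T2V items: the number of
# frames sampled from a clip at a given rate. Helper name and config
# values are assumptions for illustration.
def sampled_frame_count(duration_s, fps):
    return max(1, int(duration_s * fps))

config = {"max_pixels": 1_000_000, "fps": 2.0}
n = sampled_frame_count(8.0, config["fps"])  # 16 frames from an 8 s clip
```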
class lightrft.datasets.omnirewardbench.OmniRewardBenchT2VPairHandler[source]

Bases: OmniRewardBenchT2VHandler

Data handler for the OmniRewardBench text-to-video human-preference benchmark. Prepares data for a generative reward model on the pairwise-ranking task.

Paper: https://huggingface.co/papers/2510.23451
Dataset Repo: https://huggingface.co/datasets/HongbangYuan/OmniRewardBench

parse_item(item: Dict[str, Any], media_content: Dict[str, Any], config: Dict[str, Any]) Tuple[List[Dict], Dict][source]

Parse a data item into generative messages and metadata.

Parameters:
  • item (Dict[str, Any]) – The raw data item

  • media_content (Dict[str, Any]) – Loaded visual content

  • config (Dict[str, Any]) – Configuration for task instructions, max_pixels, and fps

Returns:

A tuple of (messages, metadata)

Return type:

Tuple[List[Dict], Dict]

Example:

messages, other = handler.parse_item(item, media_content, config)