Auto Sizzle Reel
A deep dive into the technical implementation of automated video content curation using multi-cloud AI services
Project Genesis: HBO Max Innovation Collaboration
Back in 2020, I was working with the HBO Max product innovation team on a research project to figure out whether we could use AI to automatically generate sizzle reels from full feature films. It felt like a great opportunity to work with a real customer on a real problem. I grew up a big fan of HBO, so I was especially excited to work with them.
I was working at WarnerMedia as a Principal Architect and was given the opportunity to build an AI/ML platform called ContentAI with John Ritsema. You could say it was a bit ahead of its time. In this blog post I'll share some of the key insights and learnings from the project.
The Core Challenge
How can we use AI to help automatically generate a sizzle reel from full feature films?
Project Goals
The project had two primary objectives:
- Automatically generate a sizzle reel from a feature film
- Give the content creative services team metadata to easily find the scenes/moments they can use to stitch together the sizzle reel themselves
Business Requirements Analysis
Several building blocks make a good sizzle reel for our use case. The reels we wanted to generate had the following requirements:
Technical Metadata Requirements
- Total duration: Around 60 seconds
- Shot duration: No longer than 3 seconds per shot
- Shot characteristics:
  - A celebrity in it
  - A lot of movement/action
  - No lip flap (actors talking)
- Various shot types (establishing shots, medium shots, close-ups, etc.)
- Various types of VFX-heavy shots
- Various types of content tags (kissing, family, action, etc.)
Sizzle Reel Construction Rules
When designing a sizzle reel, we implemented business rules covering length, content, and narrative structure:
SIZZLE_REEL_RULES = {
    'length_constraints': {
        'total_duration_seconds': 60,
        'max_shot_duration_seconds': 3,
        'min_shot_duration_seconds': 1
    },
    'content_requirements': {
        'celebrity_presence_required': True,
        'high_movement_threshold': 0.7,
        'exclude_dialogue_heavy_scenes': True,
        'shot_variety_required': ['establishing', 'medium', 'close_up'],
        'visual_effects_inclusion': True
    },
    'narrative_structure': {
        'tension_building': True,
        'resolution_sequences': True,
        'engagement_optimization': True,
        'story_arc_preservation': True
    }
}
Narrative Design Philosophy: We need to tell a story with the scenes we select. We want to keep the viewer engaged and wanting more, so we pick scenes that build tension and resolve it in subsequent scenes. In the future we could tailor sizzle reel generation to a specific viewer's tastes.
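To make that concrete, here is a minimal sketch of the tension/resolution ordering idea, assuming each candidate segment carries a hypothetical tension_score; this is illustrative, not the production algorithm:

# Minimal sketch of the tension/resolution ordering idea. The
# tension_score field and the alternating build/release pattern are
# illustrative assumptions, not the production algorithm.
def order_for_narrative(segments):
    """Alternate rising-tension shots with resolving shots."""
    ranked = sorted(segments, key=lambda s: s['tension_score'])
    calm, tense = ranked[:len(ranked) // 2], ranked[len(ranked) // 2:]
    reel = []
    # Build tension with increasingly intense shots, then release.
    while tense or calm:
        if tense:
            reel.append(tense.pop(0))   # rising action
        if calm:
            reel.append(calm.pop())     # brief resolution beat
    return reel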
The Technical Challenge: Scalable Content Intelligence
The entertainment industry generates petabytes of video content annually, but manually curating highlights for marketing and social media remains a bottleneck. The Sizzle Reel Extractor POC addresses this through a sophisticated rule-based AI pipeline that processes video content across multiple dimensions to automatically identify and extract compelling moments.
This technical case study examines the architecture, business rules engine, and implementation details of a system built in collaboration with HBO Max Innovation team to automate short-form content creation at enterprise scale.
Core Technical Architecture
The system implements a rule-based content intelligence engine that processes video through multiple AI services, applying configurable business rules to score and extract optimal segments:
Multi-Modal Analysis Pipeline
# Core extraction workflow implementing HBO Max requirements
def run_segments():
    # Business rule: Maximum video duration constraint
    max_seconds = config.get('max_video_duration', 3600)

    # Apply confidence thresholds per service
    min_confidence = config.get('min_confidence', 0.5)

    # Celebrity filtering rules (HBO Max requirement: celebrity presence)
    celebs_filter = config.get('celebsFilter', [])
    action_filter = config.get('action_filters', [])

    # Process through rule engine
    segments = shot_segments(max_seconds, min_confidence)
    celebrities = celebs_details(max_seconds, min_confidence, celebs_filter)
    actions = actions_details(max_seconds, min_confidence, action_filter)

    # Apply HBO Max sizzle reel construction rules
    return apply_sizzle_reel_rules(segments, celebrities, actions)


def apply_sizzle_reel_rules(segments, celebrities, actions):
    """
    Implement HBO Max specific business requirements
    """
    filtered_segments = []
    for segment in segments:
        # Rule: Shot duration constraint (max 3 seconds)
        duration = (segment['endTimestampMillis'] - segment['startTimestampMillis']) / 1000
        if duration > 3.0:
            continue

        # Rule: Celebrity presence required
        has_celebrity = any(
            segment['startTimestampMillis'] <= celeb['timestamp'] <= segment['endTimestampMillis']
            for celeb in celebrities
        )
        if not has_celebrity:
            continue

        # Rule: High movement/action required
        has_action = any(
            segment['startTimestampMillis'] <= action['timestamp'] <= segment['endTimestampMillis']
            and action['confidence'] >= 0.7  # High movement threshold
            for action in actions
        )
        if not has_action:
            continue

        # Rule: Exclude dialogue-heavy scenes (no lip flap)
        if has_dialogue_activity(segment):
            continue

        filtered_segments.append(segment)

    # Rule: Target 60-second total duration with shot variety
    return optimize_for_sizzle_reel(filtered_segments, target_duration=60)
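optimize_for_sizzle_reel and has_dialogue_activity are referenced above but not shown in this excerpt. A hedged sketch of the 60-second packer, assuming a simple greedy strategy, might look like:

# Hypothetical sketch of the 60-second packer referenced above; the
# greedy highest-confidence-first strategy is an assumption.
def optimize_for_sizzle_reel(segments, target_duration=60):
    """Greedily pack the best shots until the reel hits ~60 seconds."""
    chosen, total = [], 0.0
    for seg in sorted(segments, key=lambda s: s['confidence'], reverse=True):
        duration = (seg['endTimestampMillis'] - seg['startTimestampMillis']) / 1000
        if total + duration > target_duration:
            continue
        chosen.append(seg)
        total += duration
    # Re-sort chronologically so the reel preserves story order.
    return sorted(chosen, key=lambda s: s['startTimestampMillis'])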
Celebrity Recognition with Business Rules
The system implements sophisticated celebrity detection with configurable filtering:
def celebs_details(max_seconds, min_confidence, celebs_filter):
    # Rule: Only process celebrities in the filter list
    segments = []
    allowed = [name.lower() for name in celebs_filter]
    for celebrity_detection in aws_rekognition_results:
        name = celebrity_detection["Celebrity"]["Name"].lower()
        confidence = celebrity_detection["Celebrity"]["Confidence"]

        # Business rule: Confidence threshold
        if confidence >= min_confidence:
            # Business rule: Celebrity whitelist filtering
            if not allowed or name in allowed:
                # Extract with precise timestamps
                timestamp_ms = celebrity_detection["Timestamp"]

                # Business rule: Temporal bounds checking
                if timestamp_ms / 1000 < max_seconds:
                    segments.append(create_segment(celebrity_detection))
    return segments
Action Detection and Filtering
The system processes 69 predefined action categories with configurable filtering:
// actions_filter.json - Business rules for content types
[
    "Quake", "Video Gaming", "Play", "Selfie", "Laughing",
    "Flying", "Smile", "Knitting", "Make Out", "Kneeling",
    "Sleeping", "Kicking", "Swimming", "Stretch", "Boxing",
    "Surfing", "Wrestling", "Rock Climbing", "Diving"
]
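actions_details is invoked in run_segments but not shown in this excerpt. A plausible sketch that applies this filter, assuming an in-memory gcp_label_annotations result set with description/segments fields (the shape is my assumption), could look like:

# Plausible sketch of actions_details mirroring celebs_details; the
# gcp_label_annotations variable and its field names are assumptions.
def actions_details(max_seconds, min_confidence, action_filter):
    allowed = [a.lower() for a in action_filter]
    actions = []
    for annotation in gcp_label_annotations:
        name = annotation['description'].lower()
        if allowed and name not in allowed:
            continue  # Business rule: only whitelisted action types
        for seg in annotation['segments']:
            confidence = seg['confidence']
            timestamp_ms = seg['startTimeMillis']
            if confidence >= min_confidence and timestamp_ms / 1000 < max_seconds:
                actions.append({'name': name,
                                'timestamp': timestamp_ms,
                                'confidence': confidence})
    return actions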
Shot Segmentation with Quality Metrics
from datetime import timedelta

def shot_segments(max_seconds, min_confidence):
    # Business rule: Shot change detection with confidence scoring
    segments = []
    for segment in aws_rekognition_segments:
        confidence = segment["ShotSegment"]["Confidence"]

        # Rule: Quality threshold enforcement
        if confidence >= min_confidence:
            # Rule: Duration bounds
            duration_ms = segment["DurationMillis"]
            if is_valid_duration(duration_ms):
                # Apply temporal formatting rules
                start_time = timedelta(milliseconds=segment["StartTimestampMillis"])
                end_time = timedelta(milliseconds=segment["EndTimestampMillis"])
                segments.append({
                    "confidence": confidence,
                    "startTimestampMillis": segment["StartTimestampMillis"],
                    "endTimestampMillis": segment["EndTimestampMillis"],
                    "startTimecodeFormatted": format_timedelta(start_time),
                    "endTimecodeFormatted": format_timedelta(end_time)
                })
    return segments
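The format_timedelta helper used above isn't defined in the excerpt; a minimal version, assuming HH:MM:SS.mmm timecodes, could be:

# Minimal helper assumed by shot_segments above; formats a timedelta
# as HH:MM:SS.mmm for editor-facing timecodes.
def format_timedelta(td):
    total_ms = int(td.total_seconds() * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, millis = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}.{millis:03d}"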
Business Rules Engine Architecture
The system implements a configurable rules engine that orchestrates multiple cloud AI services through the ContentAI Platform:
Service Orchestration Graph
graph TD
    A[Video Input] --> B[Metadata Extraction]
    A --> C[Azure Video Indexer]
    A --> D[GCP Upload]
    A --> E[AWS Transcribe]
    A --> F[AWS Rekognition Video]
    D --> G[GCP Video Intelligence Labels]
    D --> H[GCP Object Tracking]
    F --> I[Celebrity Detection]
    F --> J[Content Moderation]
    F --> K[Face Analysis]
    F --> L[Label Detection]
    F --> M[Shot Segmentation]
    F --> N[Text Detection]
    B --> O[Sizzle Reel Processor]
    C --> O
    G --> O
    H --> O
    E --> O
    I --> O
    J --> O
    K --> O
    L --> O
    M --> O
    N --> O
Multi-Cloud Service Integration
# ContentAI Platform workflow definition
WORKFLOW_GRAPH = """
digraph {
    metadata -> sizzle_reel;
    azure_videoindexer -> sizzle_reel;
    gcp_upload -> gcp_videointelligence_label -> sizzle_reel;
    gcp_upload -> gcp_videointelligence_object_tracking -> sizzle_reel;
    aws_transcribe -> sizzle_reel;
    aws_rekognition_video_celebs -> sizzle_reel;
    aws_rekognition_video_content_moderation -> sizzle_reel;
    aws_rekognition_video_faces -> sizzle_reel;
    aws_rekognition_video_labels -> sizzle_reel;
    aws_rekognition_video_segments -> sizzle_reel;
    aws_rekognition_video_text_detect -> sizzle_reel;
}
"""
Runtime Configuration and Business Rules
# Example: Game of Thrones processing configuration
CONFIG_GOT = {
    "celebsFilter": [
        "Kit Harington", "Peter Dinklage", "Emilia Clarke",
        "Lena Headey", "Maisie Williams", "Sophie Turner"
    ],
    "max_video_duration": 3600,  # 1 hour processing limit
    "min_confidence": 0.7,       # High confidence for celebrity detection
    "action_filters": ["sword fighting", "battle", "dramatic dialogue"],
    "content_moderation": {
        "violence_threshold": 0.8,
        "suggestive_threshold": 0.9
    }
}

# Example: Aquaman processing configuration
CONFIG_AQUAMAN = {
    "celebsFilter": [
        "jason momoa", "amber heard", "willem dafoe",
        "patrick wilson", "yahya abdul-mateen ii",
        "temuera morrison", "nicole kidman"
    ],
    "max_video_duration": 7200,  # 2 hour processing limit
    "min_confidence": 0.6,
    "action_filters": ["swimming", "diving", "fighting", "flying"]
}
Real-World Applications: From Concept to Production
The project has already been tested on a diverse range of content:
- Game of Thrones: Automatically extracting epic battle scenes and character moments
- Aquaman: Identifying action sequences and celebrity appearances
- Rick & Morty: Finding the funniest animated moments
- Sesame Street: Highlighting educational and entertaining segments
- Classic Films: Creating modern trailers for timeless content like "Singin' in the Rain"
Each use case demonstrates the system's versatility in handling different genres, from live-action drama to animation to children's content.
The HBO Max Re:Boot Vision
This technology directly supports HBO Max's Re:Boot initiative - a complete redesign of the streaming experience that will leverage significant amounts of in-line sizzle video content. Previously, creating this volume of short-form content manually would have been prohibitively expensive and time-consuming.
The POC helps determine whether AI can produce impactful results for this use case and identifies exactly where the gaps are in current technology.
Advanced Business Rules Implementation
Segment Scoring and Ranking Algorithm
def percentages(all_clips, only):
    """
    Business rule: Rank segments by weighted scoring algorithm
    Combines multiple AI confidence scores with business priorities
    """
    # Rule: Weight different content types by business value
    SCORING_WEIGHTS = {
        'celebrity_confidence': 0.4,  # High weight for star power
        'action_confidence': 0.3,     # Medium weight for engagement
        'shot_quality': 0.2,          # Technical quality baseline
        'dialogue_sentiment': 0.1     # Narrative context
    }

    scored_segments = []
    for clip in all_clips:
        total_score = sum(clip.get(metric, 0) * weight
                          for metric, weight in SCORING_WEIGHTS.items())

        # Business rule: Minimum viable score threshold
        if total_score >= 0.6:
            scored_segments.append({
                'clip': clip,
                'score': total_score,
                'start_seconds': get_start_seconds(clip)
            })

    # Rule: Sort by composite score (descending), then by temporal position
    return sorted(scored_segments,
                  key=lambda x: (-x['score'], x['start_seconds']))


def get_start_seconds(clip):
    """Convert timestamp to seconds for temporal ordering"""
    return int(clip['startTimestampMillis'] / 1000)
Content Moderation Business Rules
def content_moderation_details(max_seconds, min_confidence, filter):
    """
    Apply content moderation rules based on platform requirements
    """
    MODERATION_RULES = {
        'Explicit Nudity':     {'threshold': 0.95, 'action': 'exclude'},
        'Suggestive':          {'threshold': 0.85, 'action': 'flag'},
        'Violence':            {'threshold': 0.80, 'action': 'age_gate'},
        'Visually Disturbing': {'threshold': 0.75, 'action': 'flag'},
        'Rude Gestures':       {'threshold': 0.70, 'action': 'review'},
        'Drugs':               {'threshold': 0.60, 'action': 'flag'},
        'Tobacco':             {'threshold': 0.50, 'action': 'age_gate'},
        'Alcohol':             {'threshold': 0.40, 'action': 'age_gate'}
    }

    moderated_segments = []
    for detection in aws_moderation_results:
        segment = {'timestamp': detection['Timestamp']}
        excluded = False
        for label in detection['ModerationLabels']:
            label_name = label['Name']
            confidence = label['Confidence']
            if label_name not in MODERATION_RULES:
                continue
            rule = MODERATION_RULES[label_name]

            # Apply business rule threshold
            if confidence >= rule['threshold']:
                action = rule['action']

                # Business logic for different actions
                if action == 'exclude':
                    excluded = True  # Skip this segment entirely
                    break
                elif action == 'flag':
                    segment['moderation_flag'] = label_name
                elif action == 'age_gate':
                    segment['age_restriction'] = True
                elif action == 'review':
                    segment['requires_review'] = True

        if not excluded:
            moderated_segments.append(segment)
    return moderated_segments
Temporal Segmentation and Clip Creation
import subprocess

def create_clip(input_path, output_path, start, duration):
    """
    Business rule: Apply padding and quality constraints to clips
    """
    # Rule: Add buffer time to avoid abrupt cuts
    start_adjusted = float(start) + 0.1
    duration_adjusted = duration - 0.2

    # Rule: Minimum clip duration for viewer engagement
    MIN_CLIP_DURATION = 2.0   # seconds
    MAX_CLIP_DURATION = 30.0  # seconds for social media

    duration_adjusted = max(MIN_CLIP_DURATION,
                            min(duration_adjusted, MAX_CLIP_DURATION))

    # Use FFmpeg with quality preservation rules
    # (global flags go before the inputs; trailing options are invalid)
    cmd = [
        'ffmpeg', '-hide_banner', '-loglevel', 'error', '-y',
        '-ss', str(start_adjusted),
        '-i', input_path,
        '-c', 'copy',  # Rule: Preserve original encoding
        '-t', str(duration_adjusted),
        output_path
    ]
    subprocess.call(cmd)


def merge_clips(file_input_path, output_path):
    """
    Business rule: Concatenate clips with transition rules
    """
    # Rule: Use copy codec to maintain quality
    cmd = [
        'ffmpeg', '-hide_banner', '-loglevel', 'error', '-y',
        '-f', 'concat',
        '-i', file_input_path,
        '-c', 'copy',  # Avoid re-encoding
        output_path
    ]
    subprocess.call(cmd)
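FFmpeg's concat demuxer expects a text file with one file '<path>' line per clip. A hypothetical end-to-end usage (the paths and segment times are illustrative):

# Hypothetical usage: cut each selected segment, then stitch the reel.
# Paths and the segment list are illustrative.
segments = [(12.4, 2.8), (95.0, 3.0), (210.7, 2.5)]  # (start, duration)
clip_paths = []
for i, (start, duration) in enumerate(segments):
    path = f"/tmp/clip_{i:03d}.mp4"
    create_clip("/media/aquaman.mp4", path, start, duration)
    clip_paths.append(path)

# The concat demuxer reads a text file with one "file '<path>'" line per clip.
with open("/tmp/clips.txt", "w") as f:
    f.writelines(f"file '{p}'\n" for p in clip_paths)

merge_clips("/tmp/clips.txt", "/tmp/sizzle_reel.mp4")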
Transcription Analysis with Sentiment Rules
def transcription_details(max_seconds):
    """
    Extract dialogue with sentiment and timing rules
    """
    SENTIMENT_RULES = {
        'high_emotion': ['excited', 'angry', 'passionate', 'dramatic'],
        'key_moments': ['climax', 'revelation', 'conflict', 'resolution'],
        'quotable': ['memorable', 'witty', 'profound', 'iconic']
    }

    transcription_segments = []
    for item in aws_transcribe_results['results']['items']:
        if item['type'] == 'pronunciation':
            # Rule: Extract high-confidence words only
            if float(item.get('confidence', 0)) >= 0.9:
                # Apply sentiment analysis rules
                word = item['content'].lower()

                # Business rule: Identify emotionally significant dialogue
                for category, keywords in SENTIMENT_RULES.items():
                    if any(keyword in word for keyword in keywords):
                        transcription_segments.append({
                            'word': item['content'],
                            'start_time': float(item['start_time']),
                            'end_time': float(item['end_time']),
                            'confidence': float(item['confidence']),
                            'sentiment_category': category
                        })
    return transcription_segments
Data Models and Quality Metrics
Celebrity Detection Data Structure
# AWS Rekognition Celebrity Detection Response
celebrity_detection = {
    "Timestamp": 154487,  # Millisecond precision
    "Celebrity": {
        "Urls": ["www.imdb.com/name/nm0002125"],
        "Name": "Tonya Harding",
        "Id": "1Hh5s19",
        "Confidence": 50.0,  # Business rule threshold
        "BoundingBox": {
            "Width": 0.8135416507720947,
            "Height": 0.7092592716217041,
            "Left": 0.12708333134651184,
            "Top": 0.15000000596046448
        },
        "Face": {
            "Landmarks": [
                {"Type": "eyeLeft", "X": 0.4496273398399353, "Y": 0.24316759407520294},
                {"Type": "eyeRight", "X": 0.550874650478363, "Y": 0.22805465757846832},
                {"Type": "nose", "X": 0.5244039297103882, "Y": 0.27304258942604065},
                {"Type": "mouthLeft", "X": 0.48769256472587585, "Y": 0.441604882478714},
                {"Type": "mouthRight", "X": 0.5692977905273438, "Y": 0.41509345173835754}
            ],
            "Pose": {
                "Roll": -6.36013126373291,
                "Yaw": 15.727333068847656,
                "Pitch": 29.050216674804688
            },
            "Quality": {
                "Brightness": 28.25617218017578,
                "Sharpness": 78.74752044677734
            },
            "Confidence": 99.57869720458984
        }
    }
}
Business Rules for Data Quality
def validate_detection_quality(detection):
    """
    Apply business rules for detection quality thresholds
    """
    QUALITY_RULES = {
        'min_face_confidence': 95.0,       # High confidence for face detection
        'min_celebrity_confidence': 50.0,  # Lower threshold for celebrity ID
        'min_brightness': 20.0,            # Avoid too-dark scenes
        'max_brightness': 80.0,            # Avoid overexposed scenes
        'min_sharpness': 50.0,             # Ensure image clarity
        'max_head_rotation': 45.0          # Limit extreme head poses
    }

    celebrity = detection.get('Celebrity', {})
    face = celebrity.get('Face', {})
    quality = face.get('Quality', {})
    pose = face.get('Pose', {})

    # Apply quality business rules
    if celebrity.get('Confidence', 0) < QUALITY_RULES['min_celebrity_confidence']:
        return False, "Celebrity confidence too low"

    if face.get('Confidence', 0) < QUALITY_RULES['min_face_confidence']:
        return False, "Face confidence too low"

    brightness = quality.get('Brightness', 0)
    if not QUALITY_RULES['min_brightness'] <= brightness <= QUALITY_RULES['max_brightness']:
        return False, "Scene too dark or overexposed"

    if quality.get('Sharpness', 0) < QUALITY_RULES['min_sharpness']:
        return False, "Image not sharp enough"

    # Check head pose for usability
    if abs(pose.get('Yaw', 0)) > QUALITY_RULES['max_head_rotation']:
        return False, "Head rotation too extreme"

    return True, "Quality check passed"
Segment Metadata Enrichment
def enrich_segment_metadata(segment):
    """
    Add business-relevant metadata to segments
    """
    enriched = {
        'segment_id': generate_segment_id(),
        'source_timestamp_ms': segment['startTimestampMillis'],
        'duration_ms': segment['endTimestampMillis'] - segment['startTimestampMillis'],
        'confidence_score': segment['confidence'],

        # Business metadata
        'marketing_value': calculate_marketing_value(segment),
        'social_media_ready': is_social_media_compatible(segment),
        'platform_restrictions': get_platform_restrictions(segment),
        'target_demographics': analyze_demographics(segment),

        # Technical metadata
        'video_quality_score': assess_video_quality(segment),
        'audio_clarity_score': assess_audio_quality(segment),
        'scene_complexity': measure_scene_complexity(segment),

        # Temporal context
        'narrative_position': calculate_narrative_position(segment),
        'emotional_intensity': measure_emotional_intensity(segment),
        'pacing_score': analyze_pacing(segment)
    }
    return enriched


def calculate_marketing_value(segment):
    """
    Business rule: Calculate marketing value based on multiple factors
    """
    MARKETING_WEIGHTS = {
        'celebrity_presence': 0.4,
        'action_intensity': 0.3,
        'emotional_impact': 0.2,
        'visual_appeal': 0.1
    }

    score = sum(segment.get(factor, 0) * weight
                for factor, weight in MARKETING_WEIGHTS.items())
    return min(score, 1.0)  # Cap at 1.0
Technical Challenges and Business Logic Gaps
Rule Engine Limitations
# Current business rule challenges
TECHNICAL_DEBT = {
    'context_awareness': {
        'problem': 'Rules operate on individual frames/segments',
        'impact': 'Missing narrative flow and character development',
        'solution': 'Implement temporal relationship modeling'
    },
    'cultural_adaptation': {
        'problem': 'Static confidence thresholds across all content',
        'impact': 'Different genres require different sensitivity',
        'solution': 'Genre-specific rule configurations'
    },
    'edge_case_handling': {
        'problem': 'Binary pass/fail rules',
        'impact': 'Loss of potentially valuable borderline content',
        'solution': 'Implement fuzzy logic and confidence gradients'
    }
}


def implement_contextual_rules():
    """
    Future enhancement: Context-aware business rules
    """
    # Rule: Consider temporal relationships between segments
    CONTEXT_RULES = {
        'narrative_flow': {
            'setup_payoff_distance': 300,  # seconds
            'character_arc_tracking': True,
            'emotional_progression': True
        },
        'genre_specific': {
            'action': {'violence_tolerance': 0.8, 'pace_preference': 'fast'},
            'drama': {'dialogue_weight': 0.6, 'emotion_sensitivity': 0.9},
            'comedy': {'timing_precision': 0.95, 'setup_tracking': True}
        }
    }
    return CONTEXT_RULES
Production Deployment Architecture
# Enterprise integration patterns
class SizzleReelProcessor:
    def __init__(self, config_manager):
        self.rules_engine = BusinessRulesEngine(config_manager)
        self.quality_validator = QualityValidator()
        self.output_formatter = OutputFormatter()

    def process_content_batch(self, content_batch):
        """
        Production workflow with error handling and monitoring
        """
        results = []
        for content_item in content_batch:
            try:
                # Apply business rules validation
                if not self.rules_engine.validate_input(content_item):
                    self.log_rejection(content_item, "Failed input validation")
                    continue

                # Multi-service analysis with timeout handling
                analysis_results = self.run_ai_pipeline(content_item)

                # Apply scoring and ranking business rules
                scored_segments = self.rules_engine.score_segments(analysis_results)

                # Quality assurance check
                validated_segments = self.quality_validator.validate_batch(scored_segments)

                # Format for downstream systems
                formatted_output = self.output_formatter.format_for_cms(validated_segments)
                results.append(formatted_output)
            except Exception as e:
                self.handle_processing_error(content_item, e)
        return results


# Monitoring and alerting rules
MONITORING_RULES = {
    'quality_thresholds': {
        'min_segments_per_video': 3,
        'max_processing_time_minutes': 60,
        'min_confidence_average': 0.7
    },
    'alert_conditions': {
        'high_rejection_rate': 0.3,  # 30% rejection triggers alert
        'processing_timeout_rate': 0.1,
        'api_error_rate': 0.05
    }
}
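To illustrate how these alert conditions might be evaluated, here is a hedged sketch; the check_alerts helper and the stats dictionary shape are my assumptions:

# Hypothetical evaluation of the alert conditions above; the stats
# dict shape is an assumption for illustration.
def check_alerts(stats, rules=MONITORING_RULES):
    """Return a list of alert names whose thresholds were breached."""
    conditions = rules['alert_conditions']
    observed = {
        'high_rejection_rate': stats['rejected'] / max(stats['processed'], 1),
        'processing_timeout_rate': stats['timeouts'] / max(stats['processed'], 1),
        'api_error_rate': stats['api_errors'] / max(stats['api_calls'], 1),
    }
    return [name for name, rate in observed.items()
            if rate >= conditions[name]]

# e.g. check_alerts({'processed': 100, 'rejected': 35,
#                    'timeouts': 2, 'api_errors': 1, 'api_calls': 600})
# -> ['high_rejection_rate']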
Business Value Metrics and KPIs
def calculate_business_metrics(processing_results):
    """
    Business rules for measuring system effectiveness
    """
    BUSINESS_KPIS = {
        'efficiency_metrics': {
            'time_saved_vs_manual': calculate_time_savings(),
            'cost_per_processed_minute': calculate_processing_cost(),
            'throughput_videos_per_hour': measure_throughput()
        },
        'quality_metrics': {
            'human_approval_rate': measure_editor_approval(),
            'audience_engagement_lift': measure_social_engagement(),
            'click_through_rate_improvement': measure_ctr_delta()
        },
        'operational_metrics': {
            'system_uptime': measure_availability(),
            'error_rate': calculate_error_percentage(),
            'api_cost_efficiency': measure_cost_per_api_call()
        }
    }
    return BUSINESS_KPIS
Lessons Learned: Technical Implementation Insights
Multi-Cloud Service Orchestration
The system's strength lies in its rule-based orchestration of multiple AI services. Key architectural decisions:
- Service Redundancy: Multiple services analyze the same content dimensions (e.g., both AWS and GCP for label detection) to improve accuracy through consensus
- Configurable Pipelines: Business rules determine which services to invoke based on content type and processing priorities
- Fallback Mechanisms: If one service fails, the system degrades gracefully to alternative providers (see the sketch below)
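As a sketch of that fallback mechanism, assuming provider clients are passed in as plain callables (the client functions named in the usage comment are hypothetical):

# Minimal sketch of the fallback idea: try providers in priority order
# and return the first successful result. Provider callables are
# illustrative assumptions, not the POC's actual service clients.
def detect_labels_with_fallback(video_uri, providers):
    """providers: ordered list of (name, callable) pairs."""
    errors = {}
    for name, detect in providers:
        try:
            return name, detect(video_uri)
        except Exception as e:  # e.g. throttling, quota, outage
            errors[name] = e    # record and fall through to next provider
    raise RuntimeError(f"All label providers failed: {errors}")

# Usage (hypothetical client functions):
# labels = detect_labels_with_fallback(
#     "s3://bucket/aquaman.mp4",
#     [("aws_rekognition", aws_detect_labels),
#      ("gcp_video_intelligence", gcp_detect_labels)])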
Performance Optimization Strategies
# Production optimization rules
OPTIMIZATION_STRATEGIES = {
    'parallel_processing': {
        'aws_rekognition_concurrent_jobs': 5,
        'gcp_batch_processing': True,
        'azure_async_processing': True
    },
    'cost_management': {
        'cache_celebrity_results': 24 * 60 * 60,  # 24 hours
        'skip_redundant_analysis': True,
        'use_preview_frames_for_quality_check': True
    },
    'quality_vs_speed_tradeoffs': {
        'fast_mode_confidence_threshold': 0.6,
        'detailed_mode_confidence_threshold': 0.9,
        'adaptive_processing_based_on_content_priority': True
    }
}
Technical Conclusion: Building Production-Ready AI Pipelines
This POC demonstrates the complexity of building enterprise-grade AI content processing systems. Key technical takeaways:
Business Rules as First-Class Citizens
The most critical architectural decision was treating business rules as configurable, versioned components rather than hard-coded logic. This enables:
- Rapid iteration on content selection criteria
- A/B testing of different rule configurations
- Genre-specific optimization without code changes (a configuration-loading sketch follows)
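A hedged sketch of what rules-as-configuration can look like, assuming a rules/<version>/<genre>.json file layout (the layout is my assumption, not the POC's):

# Hedged sketch of rules-as-configuration: load a versioned rule set
# by genre instead of hard-coding thresholds. File layout is assumed.
import json

def load_rules(genre, version="v2"):
    """Load e.g. rules/v2/action.json; falls back to defaults."""
    try:
        with open(f"rules/{version}/{genre}.json") as f:
            return json.load(f)
    except FileNotFoundError:
        with open(f"rules/{version}/default.json") as f:
            return json.load(f)

# A/B test two configurations without touching pipeline code:
# variant = "v2" if hash(content_id) % 2 else "v1"
# rules = load_rules("drama", version=variant)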
Data Quality and Validation Pipelines
Robust production systems require comprehensive validation rules at every stage:
- Input validation (file format, duration, quality; sketched after this list)
- Processing validation (API response completeness, confidence thresholds)
- Output validation (segment duration, content appropriateness)
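As an example of the first stage, a minimal input-validation sketch using ffprobe; the thresholds are illustrative, not the production values:

# Minimal sketch of the input-validation stage, assuming ffprobe is
# available on the host; thresholds are illustrative.
import json
import subprocess

def validate_input(path, max_duration_s=7200):
    """Reject files the pipeline cannot process before spending API calls."""
    probe = subprocess.run(
        ['ffprobe', '-v', 'error', '-print_format', 'json',
         '-show_format', '-show_streams', path],
        capture_output=True, text=True)
    if probe.returncode != 0:
        return False, "Unreadable or unsupported container"
    info = json.loads(probe.stdout)
    duration = float(info['format'].get('duration', 0))
    if not 0 < duration <= max_duration_s:
        return False, f"Duration {duration:.0f}s outside processing limits"
    if not any(s['codec_type'] == 'video' for s in info['streams']):
        return False, "No video stream found"
    return True, "OK"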
Monitoring and Observability
The system implements rule-based monitoring that tracks both technical metrics (API latency, error rates) and business metrics (content quality scores, human approval rates).
This technical architecture provides a foundation for scalable, maintainable AI content processing that can adapt to changing business requirements while maintaining production reliability.
Technical Details: This project uses Python with the ContentAI Platform, integrating AWS Rekognition, Azure Video Indexer, Google Cloud Video Intelligence, AWS Transcribe, and IBM MAX Audio Classifier. The system processes video through a sophisticated pipeline that analyzes visual, audio, and textual content to automatically identify and extract compelling moments for sizzle reel creation.
Industry Impact: Developed in collaboration with HBO Max Innovation team to support the Re:Boot initiative, demonstrating the potential for AI to automate short-form content creation at scale across the entertainment industry.