Conversation

@mbolaris
Introduce a per-parse regex Matcher cache in HlsPlaylistParser to significantly reduce object allocation overhead during playlist parsing.

Problem:
Each Matcher object allocates memory in both the Java heap and the native heap, creating two critical issues in production:

  1. Native heap exhaustion: Native allocations are substantial and not subject to normal Java GC pressure. When Matcher objects are created faster than they're garbage collected, the native heap can be exhausted even when Java heap has space available, causing OutOfMemoryError in the native allocator.

  2. GC-induced ANRs: Excessive Matcher allocation causes frequent GC cycles. This is particularly severe with the MultiView feature (four concurrent playback sessions) on lower-performance devices, where sustained GC pressure from thousands of short-lived Matcher objects causes Application Not Responding (ANR) events.

Both issues are exacerbated by frequent HLS playlist refreshes (every 2-6 seconds), creating continuous allocation pressure.

Solution:

  • Instantiate a MatcherCacheState per parse() call and pass it through private parsing helpers (no ThreadLocal, clear ownership/lifetime); a sketch of the approach follows this list
  • Employ access-ordered LinkedHashMap as LRU cache (max 32 entries)
  • Reuse Matcher objects via reset() instead of creating new instances
  • Eliminate allocation pressure on both the Java heap and the native heap
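
For reference, here is a minimal sketch of what a per-parse Matcher cache of this shape could look like, assuming the details listed above: a Pattern-keyed, access-ordered LinkedHashMap capped at 32 entries, with cached Matchers rebound to new input via reset(). The class name MatcherCacheState comes from this PR; the getMatcher() method, field names, and the call-site snippet below are illustrative only, not the actual implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative per-parse Matcher cache; one instance is created per parse() call. */
final class MatcherCacheState {

  private static final int MAX_ENTRIES = 32;

  // Access-ordered map: the eldest entry is the least recently used one.
  private final LinkedHashMap<Pattern, Matcher> cache =
      new LinkedHashMap<Pattern, Matcher>(
          /* initialCapacity= */ 16, /* loadFactor= */ 0.75f, /* accessOrder= */ true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Pattern, Matcher> eldest) {
          return size() > MAX_ENTRIES; // Evict the LRU entry once the cap is exceeded.
        }
      };

  /** Returns a Matcher for {@code pattern} bound to {@code input}, reusing a cached instance if present. */
  Matcher getMatcher(Pattern pattern, CharSequence input) {
    Matcher matcher = cache.get(pattern);
    if (matcher == null) {
      // First use of this pattern in the current parse: allocate once and cache.
      matcher = pattern.matcher(input);
      cache.put(pattern, matcher);
    } else {
      // Rebind the existing Matcher to the new input instead of allocating a new one.
      matcher.reset(input);
    }
    return matcher;
  }
}
```

A hypothetical call site inside a private parsing helper might then look like the following (REGEX_MEDIA_DURATION stands in for one of the parser's existing Pattern constants):

```java
// One cache per parse() invocation, passed explicitly to helpers.
MatcherCacheState matcherCache = new MatcherCacheState();
Matcher matcher = matcherCache.getMatcher(REGEX_MEDIA_DURATION, line);
if (matcher.find()) {
  // Read captured groups as the parser normally would.
}
```

Because the cache lives only for the duration of a single parse() call, no synchronization is needed and cached Matchers cannot outlive the playlist they were reset against.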

Performance impact:

  • Reduces Matcher allocations by >99% in production workloads
  • Eliminates native heap exhaustion risk from Matcher object churn
  • Drastically reduces GC frequency and duration, preventing ANRs
  • Typical cache occupancy: 6-12 patterns (well under the 32-entry limit)
  • Critical for MultiView and lower-performance devices

Testing:

  • Validated over 2+ hours with production HLS streams
  • No functional changes to parsing behavior
  • All existing tests pass
