Implement BERT tokenization. **Example:** Given a sentence, split it into subword tokens. **Constraints:** The input sentence must be shorter than 512 tokens (BERT's maximum sequence length).
Test Cases

Test Case 1
Input:
"Hello World"
Expected:
["Hello", "World"]

Test Case 2
Input:
"This is a test sentence"
Expected:
["This", "is", "a", "test", "sentence"]

+ 3 hidden test cases