def string = "I feel good" println string.tokenize()
This code will break down the String and return a List with three elements, which are the three words in the String:
[I, feel, good]
We can explore the result further:
def string = "I feel good" def tokens = string.tokenize() println tokens.size() tokens.each{ token -> println token }This will be the output:
3
I
feel
good

This shows that tokenize has broken the String down and returned a List of substrings, in order of appearance in the String.
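As a side note, calling tokenize() with no argument splits on any whitespace, not just single spaces. Here is a minimal sketch; the sample text is made up for illustration:

def messy = "I\tfeel   good\ntoday"    // tab, repeated spaces, and a newline
def tokens = messy.tokenize()          // no argument: any whitespace acts as a delimiter
println tokens                         // [I, feel, good, today]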
def fakeCsvLineContent = "1,Doe,James,5000" def tokens = fakeCsvLineContent.tokenize(",") println "ID = " + tokens[0] println "Last Name = " + tokens[1] println "First Name = " + tokens[2] println "Salary = " + tokens[3]
The code will print:
ID = 1
Last Name = Doe
First Name = James
Salary = 5000

Just take note that the parameter to the tokenize method follows the behavior of StringTokenizer: the parameter is treated as a list of delimiter characters used to break the String. For example:
def fakeCsvLineContent = "1,Doe,-James,5000" def tokens = fakeCsvLineContent.tokenize(",-") println tokens.size() tokens.each{ token -> println token }
It does not break the String on the exact comma-and-dash sequence. Instead, it interprets the argument as "split on either a comma or a dash". Hence the result below:
4
1
Doe
James
5000
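One more point worth knowing: because tokenize follows StringTokenizer semantics, it silently drops empty tokens, whereas split keeps them. Below is a minimal sketch; the sample CSV line is made up for illustration:

def lineWithEmptyField = "1,Doe,,5000"            // the first-name field is empty

println lineWithEmptyField.tokenize(",")          // [1, Doe, 5000] - the empty field is dropped
println lineWithEmptyField.split(",").toList()    // [1, Doe, , 5000] - the empty field is preserved

Keep this in mind when parsing real CSV data, where an empty field usually still needs to occupy its position in the result.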