I love PB's DIM ... AT statement, which lets me overlay a LONG array on a dynamic string in order to do fast scans. When building the arrays, though, if you are adding a known quantity of elements at a time, it can be much faster to copy the data into a preallocated, resized buffer than to concatenate the elements one at a time.
It doesn't make much difference for small strings/arrays, but the penalty grows much faster than linearly with the size of the string/array: each concatenation copies the entire string built so far, so the total work is roughly quadratic in the number of elements.
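To make the overlay idea concrete, here is a minimal sketch of the kind of scan I mean. The names (FindLong, sData, lView) are just illustrative and not part of the benchmark below. DIM ... AT maps a LONG array directly onto the string's own memory, so the scan reads it in place with no copy:

'Return the 1-based element position of lTarget in sData, or 0 if absent.
'Assumes sData holds a whole number of LONGs (LEN is a multiple of 4).
FUNCTION FindLong( sData AS STRING, BYVAL lTarget AS LONG) AS LONG
  LOCAL I AS LONG
  IF LEN( sData) < 4 THEN EXIT FUNCTION
  'Overlay, not a copy: lView() aliases the string's memory.
  'If sData is reassigned, STRPTR changes and the overlay must be re-DIMed.
  DIM lView( 0 TO LEN( sData) \ 4 - 1) AS LONG AT STRPTR( sData)
  FOR I = 0 TO UBOUND( lView)
    IF lView( I) = lTarget THEN FUNCTION = I + 1 : EXIT FUNCTION
  NEXT
END FUNCTION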
#COMPILE EXE
#DIM ALL

'The difference in timing changes dramatically with changes in %MaxIterations
%MaxIterations = 90000&

FUNCTION PBMAIN()
  LOCAL I AS LONG, sBuffer AS STRING, sTemp AS STRING, lpGetIt AS LONG PTR
  LOCAL fdStart AS DOUBLE, fdEnd AS DOUBLE

  sBuffer = MKL$( RND( 1, %MaxIterations))
  'In a real app, sBuffer would represent data that needs to be added to our array
  'Especially useful when the data has to be processed / pulled in a loop

  'Test 1: grow the string by concatenating one 4-byte element per pass
  fdStart = TIMER
  FOR I = 1 TO %MaxIterations
    sTemp = sTemp + LEFT$( sBuffer, 4)
  NEXT
  fdEnd = TIMER
  MSGBOX "concatenation took " + FORMAT$( fdEnd - fdStart) + " seconds."

  'Test 2: preallocate the full buffer once, then poke each LONG through a pointer
  fdStart = TIMER
  sTemp = STRING$( %MaxIterations * 4, $NUL)
  lpGetIt = STRPTR( sTemp)
  FOR I = 0 TO %MaxIterations - 1
    @lpGetIt[ I ] = CVL( sBuffer)
  NEXT
  fdEnd = TIMER
  MSGBOX "pointer took " + FORMAT$( fdEnd - fdStart) + " seconds."
END FUNCTION
DIM ... AT (and also UNION and the FIELD data type) really is one of those PB things that can make a big difference in speed.
Using these tools can make writing fast programs much easier.
Good that we have an example here.
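For anyone who hasn't tried UNION for this: the same four bytes can be read as one LONG or as four BYTEs without any conversion calls. A minimal sketch (the names LongBytes and u are just illustrative):

UNION LongBytes
  l AS LONG
  b( 0 TO 3) AS BYTE
END UNION

FUNCTION PBMAIN()
  LOCAL u AS LongBytes
  u.l = &H44434241   'little-endian: the low byte is &H41 ("A")
  MSGBOX CHR$( u.b( 0)) + CHR$( u.b( 1)) + CHR$( u.b( 2)) + CHR$( u.b( 3))   'shows "ABCD"
END FUNCTION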