As you've discovered, there is a hard 16MB limit on ARRAY_AGG (and in a lot of other places in Snowflake; for example, it's the maximum size of a VARIANT column).
If it is acceptable to create multiple files, then you can probably achieve this in a Stored Procedure: find some combination of column values that guarantees the data in each partition will produce an ARRAY_AGG result under 16MB, then loop through those partitions, running a COPY INTO for each one and writing to a different file each time (see the sketch below).
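Here is a minimal sketch of that approach in Snowflake SQL Scripting. The table MY_TABLE, the partitioning column REGION (assumed to keep each partition under 16MB), and the stage @MY_STAGE are all hypothetical names, so adapt them to your schema:

```sql
-- Loop over distinct partition values and unload each partition to its own file
CREATE OR REPLACE PROCEDURE export_json_partitions()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
DECLARE
  c1 CURSOR FOR SELECT DISTINCT region FROM my_table;
BEGIN
  FOR rec IN c1 DO
    -- One COPY INTO per partition; each file holds a complete JSON array
    EXECUTE IMMEDIATE
      'COPY INTO @my_stage/export_' || rec.region || '.json ' ||
      'FROM (SELECT array_agg(object_construct(*)) FROM my_table ' ||
      '      WHERE region = ''' || rec.region || ''') ' ||
      'FILE_FORMAT = (TYPE = JSON) OVERWRITE = TRUE SINGLE = TRUE';
  END FOR;
  RETURN 'done';
END;
$$;
```

A side benefit of aggregating with ARRAY_AGG(OBJECT_CONSTRUCT(*)) per partition is that each output file is already a well-formed JSON array, so no post-processing is needed.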
If you have to produce a single file, then I can't think of a way of achieving this in Snowflake (though someone else may be able to). If you can process the file once it is written to S3, it would be straightforward to copy the data out as JSON and then edit it to add the '[' and ']' around it (plus commas between the records, since Snowflake unloads newline-delimited JSON).
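A minimal sketch of that unload, using the same hypothetical table and stage as above. Each row lands as one JSON object per line, so the file only becomes a valid JSON array after the brackets and commas are added downstream. Note that COPY INTO a location also defaults to a 16MB per-file cap, so MAX_FILE_SIZE needs to be raised explicitly:

```sql
COPY INTO @my_stage/export.json
FROM (SELECT object_construct(*) FROM my_table)
FILE_FORMAT = (TYPE = JSON)
SINGLE = TRUE
MAX_FILE_SIZE = 4900000000  -- default cap is 16MB; up to 5GB with SINGLE = TRUE
OVERWRITE = TRUE;
```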