Apache Iceberg version
None
Query engine
None
Please describe the bug 🐞
I have a job that reads ORC files and writes them to an Iceberg table. The files that get created are around 100 MB, not 512 MB, which is the default value of write.target-file-size-bytes. I also tried setting write.target-file-size-bytes to 512 MB explicitly, but the files are still around 100 MB.
// Read the source ORC files
val df = spark.read.orc("s3://hdp-temp/arch/csv3_2023")

// Write into an Iceberg table, requesting 512 MB target data files
df.writeTo("db.batch_iceberg_test3")
  .tableProperty("write.target-file-size-bytes", "536870912")
  .createOrReplace()
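
For reference, this is a minimal sketch of how I checked the result (table name as in the example above; it uses SHOW TBLPROPERTIES and Iceberg's standard files metadata table to inspect what was actually written):

// Confirm the property was persisted on the table
spark.sql("SHOW TBLPROPERTIES db.batch_iceberg_test3").show(truncate = false)

// Inspect the sizes of the data files Iceberg produced
spark.sql(
  """SELECT file_path, file_size_in_bytes
    |FROM db.batch_iceberg_test3.files
    |ORDER BY file_size_in_bytes DESC""".stripMargin
).show(truncate = false)

The property shows up as 536870912, yet file_size_in_bytes stays around 100 MB.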
