Java ByteBuffer performance issue
2022-09-02 22:04:50
While processing several multi-gigabyte files I noticed something odd: reading from a FileChannel into a reused ByteBuffer allocated with allocateDirect() is much slower than reading from a MappedByteBuffer; in fact it is even slower than reading into byte arrays using regular read calls!
I had expected it to be (almost) as fast as reading from MappedByteBuffers, since my ByteBuffer is allocated with allocateDirect(), so the read should end up directly in my ByteBuffer without any intermediate copy.
My question now is: what am I doing wrong? Or is ByteBuffer + FileChannel really slower than regular I/O / mmap?
In the example code below I also added some code that converts what is read into long values, because that is what my real code does all the time. I would expect the ByteBuffer getLong() method to be much faster than my own byte shuffler.
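As a small aside, here is a minimal, self-contained sketch of the two conversions I am comparing (the class name, the loop-based shuffler variant and the sample bytes are made up for illustration). ByteBuffer defaults to big-endian byte order, which is the same order my shuffler assembles, so both lines should print the same value:

import java.nio.ByteBuffer;

class ConversionSketch {
    // equivalent loop form of the manual big-endian conversion used in the benchmark
    static long byteArrayToLong(byte[] in, int offset) {
        long result = 0;
        for (int i = 0; i < 8; i++)
            result = (result << 8) | (in[offset + i] & 0xff);
        return result;
    }

    public static void main(String[] args) {
        byte[] raw = {0, 0, 0, 0, 0, 0, 1, 2};               // example 8-byte field
        System.out.println(byteArrayToLong(raw, 0));          // 258 (manual shuffler)
        System.out.println(ByteBuffer.wrap(raw).getLong(0));  // 258 (big-endian by default)
    }
}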
Test results:
mmap: 3.828
bytebuffer: 55.097
regular i/o: 38.175
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;

class testbb {
    static final int size = 536870904, n = size / 24;

    // manual big-endian conversion of 8 bytes to a long ("byte shuffler")
    static public long byteArrayToLong(byte [] in, int offset) {
        return ((((((((long)(in[offset + 0] & 0xff) << 8) | (long)(in[offset + 1] & 0xff)) << 8 | (long)(in[offset + 2] & 0xff)) << 8 | (long)(in[offset + 3] & 0xff)) << 8 | (long)(in[offset + 4] & 0xff)) << 8 | (long)(in[offset + 5] & 0xff)) << 8 | (long)(in[offset + 6] & 0xff)) << 8 | (long)(in[offset + 7] & 0xff);
    }

    public static void main(String [] args) throws IOException {
        long start;
        RandomAccessFile fileHandle;
        FileChannel fileChannel;

        // create file
        fileHandle = new RandomAccessFile("file.dat", "rw");
        byte [] buffer = new byte[24];
        for(int index=0; index<n; index++)
            fileHandle.write(buffer);
        fileChannel = fileHandle.getChannel();

        // mmap(): copy each 24-byte record out of the mapped buffer, then convert
        MappedByteBuffer mbb = fileChannel.map(FileChannel.MapMode.READ_WRITE, 0, size);
        byte [] buffer1 = new byte[24];
        start = System.currentTimeMillis();
        for(int index=0; index<n; index++) {
            mbb.position(index * 24);
            mbb.get(buffer1, 0, 24);
            long dummy1 = byteArrayToLong(buffer1, 0);
            long dummy2 = byteArrayToLong(buffer1, 8);
            long dummy3 = byteArrayToLong(buffer1, 16);
        }
        System.out.println("mmap: " + (System.currentTimeMillis() - start) / 1000.0);

        // bytebuffer: positional FileChannel.read() into a reused direct buffer
        ByteBuffer buffer2 = ByteBuffer.allocateDirect(24);
        start = System.currentTimeMillis();
        for(int index=0; index<n; index++) {
            buffer2.rewind();
            fileChannel.read(buffer2, index * 24);
            buffer2.rewind(); // need to rewind it to be able to use it
            long dummy1 = buffer2.getLong();
            long dummy2 = buffer2.getLong();
            long dummy3 = buffer2.getLong();
        }
        System.out.println("bytebuffer: " + (System.currentTimeMillis() - start) / 1000.0);

        // regular i/o: seek() + read() into a plain byte array
        byte [] buffer3 = new byte[24];
        start = System.currentTimeMillis();
        for(int index=0; index<n; index++) {
            fileHandle.seek(index * 24);
            fileHandle.read(buffer3);
            long dummy1 = byteArrayToLong(buffer3, 0);
            long dummy2 = byteArrayToLong(buffer3, 8);
            long dummy3 = byteArrayToLong(buffer3, 16);
        }
        System.out.println("regular i/o: " + (System.currentTimeMillis() - start) / 1000.0);
    }
}
Since loading large segments and then processing them is not an option (I will be reading data all over the place), I think I should stick with the MappedByteBuffer. Thanks everyone for the suggestions.
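For the record, this is roughly the access pattern I plan to settle on: absolute getLong() calls straight on the MappedByteBuffer, so the intermediate byte[] copy and my manual conversion disappear. This is only a sketch under my assumptions about the record layout (three big-endian longs per 24-byte record); the class name and the record index are made up:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

class MmapGetLongSketch {
    public static void main(String[] args) throws Exception {
        RandomAccessFile fileHandle = new RandomAccessFile("file.dat", "r");
        FileChannel fileChannel = fileHandle.getChannel();
        MappedByteBuffer mbb = fileChannel.map(FileChannel.MapMode.READ_ONLY, 0, fileChannel.size());

        // read one 24-byte record (three big-endian longs) at an arbitrary index,
        // using absolute getLong() so no intermediate byte[] copy is needed
        int index = 12345;                       // hypothetical record number
        long a = mbb.getLong(index * 24);
        long b = mbb.getLong(index * 24 + 8);
        long c = mbb.getLong(index * 24 + 16);
        System.out.println(a + " " + b + " " + c);

        fileHandle.close();
    }
}

I still need to measure whether this is noticeably faster than the position()/get() copy into buffer1 in the mmap test above.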