As I recall, the DiskLruCache class comes from a Google sample project called BitmapFun, whose purpose is to make loading bitmaps easier. The project lives at https://developer.android.com/training/displaying-bitmaps/index.html, and it ships a DiskLruCache class used for disk caching of images. If you are familiar with caching, you know a cache usually has two levels, memory and disk; the disk cache provided by this class uses the LRU (Least Recently Used) algorithm. The algorithm keeps frequently used data in the cache and evicts data that has not been used for a long time. It works much like the human brain: material you reviewed recently sticks, while things you have not looked at in a long time fade away. Enough preamble, let's get to the main content.
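To make the LRU idea concrete, here is a minimal, hedged sketch of an in-memory LRU cache built on `LinkedHashMap` with access ordering, which is the same data structure DiskLruCache uses internally for its `lruEntries` map; the capacity of 3 and the sample keys are purely illustrative assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        // accessOrder = true: iteration order is least-recently-accessed first,
        // the same constructor flag DiskLruCache passes to its lruEntries map.
        super(0, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the size limit is exceeded.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, String> cache = new SimpleLruCache<>(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");        // touch "a" so it becomes most recently used
        cache.put("d", "4");   // over capacity: "b" (least recently used) is evicted
        System.out.println(cache.keySet()); // prints [c, a, d]
    }
}
```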
1. Source Code Analysis
1.1 Source Code
Diving straight into a line-by-line analysis is not my style, so for now I will simply paste the code and then summarize it from an intuitive point of view. First, note that this class is not part of the Android SDK; you have to download it and add it to your project yourself.

/* * Copyright (C) 2011 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package libcore.io; import java.io.BufferedWriter; import java.io.Closeable; import java.io.EOFException; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.FileWriter; import java.io.FilterOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.OutputStream; import java.io.OutputStreamWriter; import java.io.Writer; import java.nio.charset.Charsets; import java.util.ArrayList; import java.util.Arrays; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.Map; import java.util.concurrent.Callable; import java.util.concurrent.ExecutorService; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; /** * A cache that uses a bounded amount of space on a filesystem. Each cache * entry has a string key and a fixed number of values. Values are byte * sequences, accessible as streams or files. Each value must be between {@code * 0} and {@code Integer.MAX_VALUE} bytes in length. * * <p>The cache stores its data in a directory on the filesystem. This * directory must be exclusive to the cache; the cache may delete or overwrite * files from its directory. It is an error for multiple processes to use the * same cache directory at the same time. * * <p>This cache limits the number of bytes that it will store on the * filesystem. When the number of stored bytes exceeds the limit, the cache will * remove entries in the background until the limit is satisfied. The limit is * not strict: the cache may temporarily exceed it while waiting for files to be * deleted. The limit does not include filesystem overhead or the cache * journal so space-sensitive applications should set a conservative limit. * * <p>Clients call {@link #edit} to create or update the values of an entry. An * entry may have only one editor at one time; if a value is not available to be * edited then {@link #edit} will return null. * <ul> * <li>When an entry is being <strong>created</strong> it is necessary to * supply a full set of values; the empty value should be used as a * placeholder if necessary. * <li>When an entry is being <strong>edited</strong>, it is not necessary * to supply data for every value; values default to their previous * value. * </ul> * Every {@link #edit} call must be matched by a call to {@link Editor#commit} * or {@link Editor#abort}. Committing is atomic: a read observes the full set * of values as they were before or after the commit, but never a mix of values. * * <p>Clients call {@link #get} to read a snapshot of an entry. The read will * observe the value at the time that {@link #get} was called. Updates and * removals after the call do not impact ongoing reads. * * <p>This class is tolerant of some I/O errors. 
If files are missing from the * filesystem, the corresponding entries will be dropped from the cache. If * an error occurs while writing a cache value, the edit will fail silently. * Callers should handle other problems by catching {@code IOException} and * responding appropriately. */ public final class DiskLruCache implements Closeable { static final String JOURNAL_FILE = "journal"; static final String JOURNAL_FILE_TMP = "journal.tmp"; static final String MAGIC = "libcore.io.DiskLruCache"; static final String VERSION_1 = "1"; static final long ANY_SEQUENCE_NUMBER = -1; private static final String CLEAN = "CLEAN"; private static final String DIRTY = "DIRTY"; private static final String REMOVE = "REMOVE"; private static final String READ = "READ"; /* * This cache uses a journal file named "journal". A typical journal file * looks like this: * libcore.io.DiskLruCache * 1 * 100 * 2 * * CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054 * DIRTY 335c4c6028171cfddfbaae1a9c313c52 * CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342 * REMOVE 335c4c6028171cfddfbaae1a9c313c52 * DIRTY 1ab96a171faeeee38496d8b330771a7a * CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234 * READ 335c4c6028171cfddfbaae1a9c313c52 * READ 3400330d1dfc7f3f7f4b8d4d803dfcf6 * * The first five lines of the journal form its header. They are the * constant string "libcore.io.DiskLruCache", the disk cache's version, * the application's version, the value count, and a blank line. * * Each of the subsequent lines in the file is a record of the state of a * cache entry. Each line contains space-separated values: a state, a key, * and optional state-specific values. * o DIRTY lines track that an entry is actively being created or updated. * Every successful DIRTY action should be followed by a CLEAN or REMOVE * action. DIRTY lines without a matching CLEAN or REMOVE indicate that * temporary files may need to be deleted. * o CLEAN lines track a cache entry that has been successfully published * and may be read. A publish line is followed by the lengths of each of * its values. * o READ lines track accesses for LRU. * o REMOVE lines track entries that have been deleted. * * The journal file is appended to as cache operations occur. The journal may * occasionally be compacted by dropping redundant lines. A temporary file named * "journal.tmp" will be used during compaction; that file should be deleted if * it exists when the cache is opened. */ private final File directory; private final File journalFile; private final File journalFileTmp; private final int appVersion; private final long maxSize; private final int valueCount; private long size = 0; private Writer journalWriter; private final LinkedHashMap<String, Entry> lruEntries = new LinkedHashMap<String, Entry>(0, 0.75f, true); private int redundantOpCount; /** * To differentiate between old and current snapshots, each entry is given * a sequence number each time an edit is committed. A snapshot is stale if * its sequence number is not equal to its entry's sequence number. */ private long nextSequenceNumber = 0; /** This cache uses a single background thread to evict entries. 
*/ private final ExecutorService executorService = new ThreadPoolExecutor(0, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>()); private final Callable<Void> cleanupCallable = new Callable<Void>() { @Override public Void call() throws Exception { synchronized (DiskLruCache.this) { if (journalWriter == null) { return null; // closed } trimToSize(); if (journalRebuildRequired()) { rebuildJournal(); redundantOpCount = 0; } } return null; } }; private DiskLruCache(File directory, int appVersion, int valueCount, long maxSize) { this.directory = directory; this.appVersion = appVersion; this.journalFile = new File(directory, JOURNAL_FILE); this.journalFileTmp = new File(directory, JOURNAL_FILE_TMP); this.valueCount = valueCount; this.maxSize = maxSize; } /** * Opens the cache in {@code directory}, creating a cache if none exists * there. * * @param directory a writable directory * @param appVersion * @param valueCount the number of values per cache entry. Must be positive. * @param maxSize the maximum number of bytes this cache should use to store * @throws IOException if reading or writing the cache directory fails */ public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize) throws IOException { if (maxSize <= 0) { throw new IllegalArgumentException("maxSize <= 0"); } if (valueCount <= 0) { throw new IllegalArgumentException("valueCount <= 0"); } // prefer to pick up where we left off DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize); if (cache.journalFile.exists()) { try { cache.readJournal(); cache.processJournal(); cache.journalWriter = new BufferedWriter(new FileWriter(cache.journalFile, true)); return cache; } catch (IOException journalIsCorrupt) { System.logW("DiskLruCache " + directory + " is corrupt: " + journalIsCorrupt.getMessage() + ", removing"); cache.delete(); } } // create a new empty cache directory.mkdirs(); cache = new DiskLruCache(directory, appVersion, valueCount, maxSize); cache.rebuildJournal(); return cache; } private void readJournal() throws IOException { StrictLineReader reader = new StrictLineReader(new FileInputStream(journalFile), Charsets.US_ASCII); try { String magic = reader.readLine(); String version = reader.readLine(); String appVersionString = reader.readLine(); String valueCountString = reader.readLine(); String blank = reader.readLine(); if (!MAGIC.equals(magic) || !VERSION_1.equals(version) || !Integer.toString(appVersion).equals(appVersionString) || !Integer.toString(valueCount).equals(valueCountString) || !"".equals(blank)) { throw new IOException("unexpected journal header: [" + magic + ", " + version + ", " + valueCountString + ", " + blank + "]"); } int lineCount = 0; while (true) { try { readJournalLine(reader.readLine()); lineCount++; } catch (EOFException endOfJournal) { break; } } redundantOpCount = lineCount - lruEntries.size(); } finally { IoUtils.closeQuietly(reader); } } private void readJournalLine(String line) throws IOException { int firstSpace = line.indexOf(' '); if (firstSpace == -1) { throw new IOException("unexpected journal line: " + line); } int keyBegin = firstSpace + 1; int secondSpace = line.indexOf(' ', keyBegin); final String key; if (secondSpace == -1) { key = line.substring(keyBegin); if (firstSpace == REMOVE.length() && line.startsWith(REMOVE)) { lruEntries.remove(key); return; } } else { key = line.substring(keyBegin, secondSpace); } Entry entry = lruEntries.get(key); if (entry == null) { entry = new Entry(key); lruEntries.put(key, entry); } if 
(secondSpace != -1 && firstSpace == CLEAN.length() && line.startsWith(CLEAN)) { String[] parts = line.substring(secondSpace + 1).split(" "); entry.readable = true; entry.currentEditor = null; entry.setLengths(parts); } else if (secondSpace == -1 && firstSpace == DIRTY.length() && line.startsWith(DIRTY)) { entry.currentEditor = new Editor(entry); } else if (secondSpace == -1 && firstSpace == READ.length() && line.startsWith(READ)) { // this work was already done by calling lruEntries.get() } else { throw new IOException("unexpected journal line: " + line); } } /** * Computes the initial size and collects garbage as a part of opening the * cache. Dirty entries are assumed to be inconsistent and will be deleted. */ private void processJournal() throws IOException { deleteIfExists(journalFileTmp); for (Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext(); ) { Entry entry = i.next(); if (entry.currentEditor == null) { for (int t = 0; t < valueCount; t++) { size += entry.lengths[t]; } } else { entry.currentEditor = null; for (int t = 0; t < valueCount; t++) { deleteIfExists(entry.getCleanFile(t)); deleteIfExists(entry.getDirtyFile(t)); } i.remove(); } } } /** * Creates a new journal that omits redundant information. This replaces the * current journal if it exists. */ private synchronized void rebuildJournal() throws IOException { if (journalWriter != null) { journalWriter.close(); } Writer writer = new BufferedWriter(new FileWriter(journalFileTmp)); writer.write(MAGIC); writer.write("\n"); writer.write(VERSION_1); writer.write("\n"); writer.write(Integer.toString(appVersion)); writer.write("\n"); writer.write(Integer.toString(valueCount)); writer.write("\n"); writer.write("\n"); for (Entry entry : lruEntries.values()) { if (entry.currentEditor != null) { writer.write(DIRTY + ' ' + entry.key + '\n'); } else { writer.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n'); } } writer.close(); journalFileTmp.renameTo(journalFile); journalWriter = new BufferedWriter(new FileWriter(journalFile, true)); } private static void deleteIfExists(File file) throws IOException { try { Libcore.os.remove(file.getPath()); } catch (ErrnoException errnoException) { if (errnoException.errno != OsConstants.ENOENT) { throw errnoException.rethrowAsIOException(); } } } /** * Returns a snapshot of the entry named {@code key}, or null if it doesn't * exist is not currently readable. If a value is returned, it is moved to * the head of the LRU queue. */ public synchronized Snapshot get(String key) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (entry == null) { return null; } if (!entry.readable) { return null; } /* * Open all streams eagerly to guarantee that we see a single published * snapshot. If we opened streams lazily then the streams could come * from different edits. */ InputStream[] ins = new InputStream[valueCount]; try { for (int i = 0; i < valueCount; i++) { ins[i] = new FileInputStream(entry.getCleanFile(i)); } } catch (FileNotFoundException e) { // a file must have been deleted manually! return null; } redundantOpCount++; journalWriter.append(READ + ' ' + key + '\n'); if (journalRebuildRequired()) { executorService.submit(cleanupCallable); } return new Snapshot(key, entry.sequenceNumber, ins); } /** * Returns an editor for the entry named {@code key}, or null if another * edit is in progress. 
*/ public Editor edit(String key) throws IOException { return edit(key, ANY_SEQUENCE_NUMBER); } private synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER && (entry == null || entry.sequenceNumber != expectedSequenceNumber)) { return null; // snapshot is stale } if (entry == null) { entry = new Entry(key); lruEntries.put(key, entry); } else if (entry.currentEditor != null) { return null; // another edit is in progress } Editor editor = new Editor(entry); entry.currentEditor = editor; // flush the journal before creating files to prevent file leaks journalWriter.write(DIRTY + ' ' + key + '\n'); journalWriter.flush(); return editor; } /** * Returns the directory where this cache stores its data. */ public File getDirectory() { return directory; } /** * Returns the maximum number of bytes that this cache should use to store * its data. */ public long maxSize() { return maxSize; } /** * Returns the number of bytes currently being used to store the values in * this cache. This may be greater than the max size if a background * deletion is pending. */ public synchronized long size() { return size; } private synchronized void completeEdit(Editor editor, boolean success) throws IOException { Entry entry = editor.entry; if (entry.currentEditor != editor) { throw new IllegalStateException(); } // if this edit is creating the entry for the first time, every index must have a value if (success && !entry.readable) { for (int i = 0; i < valueCount; i++) { if (!editor.written[i]) { editor.abort(); throw new IllegalStateException("Newly created entry didn't create value for index " + i); } if (!entry.getDirtyFile(i).exists()) { editor.abort(); System.logW("DiskLruCache: Newly created entry doesn't have file for index " + i); return; } } } for (int i = 0; i < valueCount; i++) { File dirty = entry.getDirtyFile(i); if (success) { if (dirty.exists()) { File clean = entry.getCleanFile(i); dirty.renameTo(clean); long oldLength = entry.lengths[i]; long newLength = clean.length(); entry.lengths[i] = newLength; size = size - oldLength + newLength; } } else { deleteIfExists(dirty); } } redundantOpCount++; entry.currentEditor = null; if (entry.readable | success) { entry.readable = true; journalWriter.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n'); if (success) { entry.sequenceNumber = nextSequenceNumber++; } } else { lruEntries.remove(entry.key); journalWriter.write(REMOVE + ' ' + entry.key + '\n'); } if (size > maxSize || journalRebuildRequired()) { executorService.submit(cleanupCallable); } } /** * We only rebuild the journal when it will halve the size of the journal * and eliminate at least 2000 ops. */ private boolean journalRebuildRequired() { final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000; return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD && redundantOpCount >= lruEntries.size(); } /** * Drops the entry for {@code key} if it exists and can be removed. Entries * actively being edited cannot be removed. * * @return true if an entry was removed. 
*/ public synchronized boolean remove(String key) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (entry == null || entry.currentEditor != null) { return false; } for (int i = 0; i < valueCount; i++) { File file = entry.getCleanFile(i); if (!file.delete()) { throw new IOException("failed to delete " + file); } size -= entry.lengths[i]; entry.lengths[i] = 0; } redundantOpCount++; journalWriter.append(REMOVE + ' ' + key + '\n'); lruEntries.remove(key); if (journalRebuildRequired()) { executorService.submit(cleanupCallable); } return true; } /** * Returns true if this cache has been closed. */ public boolean isClosed() { return journalWriter == null; } private void checkNotClosed() { if (journalWriter == null) { throw new IllegalStateException("cache is closed"); } } /** * Force buffered operations to the filesystem. */ public synchronized void flush() throws IOException { checkNotClosed(); trimToSize(); journalWriter.flush(); } /** * Closes this cache. Stored values will remain on the filesystem. */ public synchronized void close() throws IOException { if (journalWriter == null) { return; // already closed } for (Entry entry : new ArrayList<Entry>(lruEntries.values())) { if (entry.currentEditor != null) { entry.currentEditor.abort(); } } trimToSize(); journalWriter.close(); journalWriter = null; } private void trimToSize() throws IOException { while (size > maxSize) { Map.Entry<String, Entry> toEvict = lruEntries.eldest(); remove(toEvict.getKey()); } } /** * Closes the cache and deletes all of its stored values. This will delete * all files in the cache directory including files that weren't created by * the cache. */ public void delete() throws IOException { close(); IoUtils.deleteContents(directory); } private void validateKey(String key) { if (key.contains(" ") || key.contains("\n") || key.contains("\r")) { throw new IllegalArgumentException( "keys must not contain spaces or newlines: \"" + key + "\""); } } private static String inputStreamToString(InputStream in) throws IOException { return Streams.readFully(new InputStreamReader(in, Charsets.UTF_8)); } /** * A snapshot of the values for an entry. */ public final class Snapshot implements Closeable { private final String key; private final long sequenceNumber; private final InputStream[] ins; private Snapshot(String key, long sequenceNumber, InputStream[] ins) { this.key = key; this.sequenceNumber = sequenceNumber; this.ins = ins; } /** * Returns an editor for this snapshot's entry, or null if either the * entry has changed since this snapshot was created or if another edit * is in progress. */ public Editor edit() throws IOException { return DiskLruCache.this.edit(key, sequenceNumber); } /** * Returns the unbuffered stream with the value for {@code index}. */ public InputStream getInputStream(int index) { return ins[index]; } /** * Returns the string value for {@code index}. */ public String getString(int index) throws IOException { return inputStreamToString(getInputStream(index)); } @Override public void close() { for (InputStream in : ins) { IoUtils.closeQuietly(in); } } } /** * Edits the values for an entry. */ public final class Editor { private final Entry entry; private final boolean[] written; private boolean hasErrors; private Editor(Entry entry) { this.entry = entry; this.written = (entry.readable) ? null : new boolean[valueCount]; } /** * Returns an unbuffered input stream to read the last committed value, * or null if no value has been committed. 
*/ public InputStream newInputStream(int index) throws IOException { synchronized (DiskLruCache.this) { if (entry.currentEditor != this) { throw new IllegalStateException(); } if (!entry.readable) { return null; } return new FileInputStream(entry.getCleanFile(index)); } } /** * Returns the last committed value as a string, or null if no value * has been committed. */ public String getString(int index) throws IOException { InputStream in = newInputStream(index); return in != null ? inputStreamToString(in) : null; } /** * Returns a new unbuffered output stream to write the value at * {@code index}. If the underlying output stream encounters errors * when writing to the filesystem, this edit will be aborted when * {@link #commit} is called. The returned output stream does not throw * IOExceptions. */ public OutputStream newOutputStream(int index) throws IOException { synchronized (DiskLruCache.this) { if (entry.currentEditor != this) { throw new IllegalStateException(); } if (!entry.readable) { written[index] = true; } return new FaultHidingOutputStream(new FileOutputStream(entry.getDirtyFile(index))); } } /** * Sets the value at {@code index} to {@code value}. */ public void set(int index, String value) throws IOException { Writer writer = null; try { writer = new OutputStreamWriter(newOutputStream(index), Charsets.UTF_8); writer.write(value); } finally { IoUtils.closeQuietly(writer); } } /** * Commits this edit so it is visible to readers. This releases the * edit lock so another edit may be started on the same key. */ public void commit() throws IOException { if (hasErrors) { completeEdit(this, false); remove(entry.key); // the previous entry is stale } else { completeEdit(this, true); } } /** * Aborts this edit. This releases the edit lock so another edit may be * started on the same key. */ public void abort() throws IOException { completeEdit(this, false); } private class FaultHidingOutputStream extends FilterOutputStream { private FaultHidingOutputStream(OutputStream out) { super(out); } @Override public void write(int oneByte) { try { out.write(oneByte); } catch (IOException e) { hasErrors = true; } } @Override public void write(byte[] buffer, int offset, int length) { try { out.write(buffer, offset, length); } catch (IOException e) { hasErrors = true; } } @Override public void close() { try { out.close(); } catch (IOException e) { hasErrors = true; } } @Override public void flush() { try { out.flush(); } catch (IOException e) { hasErrors = true; } } } } private final class Entry { private final String key; /** Lengths of this entry's files. */ private final long[] lengths; /** True if this entry has ever been published */ private boolean readable; /** The ongoing edit or null if this entry is not being edited. */ private Editor currentEditor; /** The sequence number of the most recently committed edit to this entry. */ private long sequenceNumber; private Entry(String key) { this.key = key; this.lengths = new long[valueCount]; } public String getLengths() throws IOException { StringBuilder result = new StringBuilder(); for (long size : lengths) { result.append(' ').append(size); } return result.toString(); } /** * Set lengths using decimal numbers like "10123". 
*/ private void setLengths(String[] strings) throws IOException { if (strings.length != valueCount) { throw invalidLengths(strings); } try { for (int i = 0; i < strings.length; i++) { lengths[i] = Long.parseLong(strings[i]); } } catch (NumberFormatException e) { throw invalidLengths(strings); } } private IOException invalidLengths(String[] strings) throws IOException { throw new IOException("unexpected journal line: " + Arrays.toString(strings)); } public File getCleanFile(int i) { return new File(directory, key + "." + i); } public File getDirtyFile(int i) { return new File(directory, key + "." + i + ".tmp"); } } }
Source of the newer BitmapFun version:

/* * Copyright (C) 2011 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.example.android.displayingbitmaps.util; import java.io.BufferedInputStream; import java.io.BufferedWriter; import java.io.Closeable; import java.io.EOFException; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.FileWriter; import java.io.FilterOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.OutputStream; import java.io.OutputStreamWriter; import java.io.Reader; import java.io.StringWriter; import java.io.Writer; import java.lang.reflect.Array; import java.nio.charset.Charset; import java.util.ArrayList; import java.util.Arrays; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.Map; import java.util.concurrent.Callable; import java.util.concurrent.ExecutorService; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; /** ****************************************************************************** * Taken from the JB source code, can be found in: * libcore/luni/src/main/java/libcore/io/DiskLruCache.java * or direct link: * https://android.googlesource.com/platform/libcore/+/android-4.1.1_r1/luni/src/main/java/libcore/io/DiskLruCache.java ****************************************************************************** * * A cache that uses a bounded amount of space on a filesystem. Each cache * entry has a string key and a fixed number of values. Values are byte * sequences, accessible as streams or files. Each value must be between {@code * 0} and {@code Integer.MAX_VALUE} bytes in length. * * <p>The cache stores its data in a directory on the filesystem. This * directory must be exclusive to the cache; the cache may delete or overwrite * files from its directory. It is an error for multiple processes to use the * same cache directory at the same time. * * <p>This cache limits the number of bytes that it will store on the * filesystem. When the number of stored bytes exceeds the limit, the cache will * remove entries in the background until the limit is satisfied. The limit is * not strict: the cache may temporarily exceed it while waiting for files to be * deleted. The limit does not include filesystem overhead or the cache * journal so space-sensitive applications should set a conservative limit. * * <p>Clients call {@link #edit} to create or update the values of an entry. An * entry may have only one editor at one time; if a value is not available to be * edited then {@link #edit} will return null. * <ul> * <li>When an entry is being <strong>created</strong> it is necessary to * supply a full set of values; the empty value should be used as a * placeholder if necessary. * <li>When an entry is being <strong>edited</strong>, it is not necessary * to supply data for every value; values default to their previous * value. 
* </ul> * Every {@link #edit} call must be matched by a call to {@link Editor#commit} * or {@link Editor#abort}. Committing is atomic: a read observes the full set * of values as they were before or after the commit, but never a mix of values. * * <p>Clients call {@link #get} to read a snapshot of an entry. The read will * observe the value at the time that {@link #get} was called. Updates and * removals after the call do not impact ongoing reads. * * <p>This class is tolerant of some I/O errors. If files are missing from the * filesystem, the corresponding entries will be dropped from the cache. If * an error occurs while writing a cache value, the edit will fail silently. * Callers should handle other problems by catching {@code IOException} and * responding appropriately. */ public final class DiskLruCache implements Closeable { static final String JOURNAL_FILE = "journal"; static final String JOURNAL_FILE_TMP = "journal.tmp"; static final String MAGIC = "libcore.io.DiskLruCache"; static final String VERSION_1 = "1"; static final long ANY_SEQUENCE_NUMBER = -1; private static final String CLEAN = "CLEAN"; private static final String DIRTY = "DIRTY"; private static final String REMOVE = "REMOVE"; private static final String READ = "READ"; private static final Charset UTF_8 = Charset.forName("UTF-8"); private static final int IO_BUFFER_SIZE = 8 * 1024; /* * This cache uses a journal file named "journal". A typical journal file * looks like this: * libcore.io.DiskLruCache * 1 * 100 * 2 * * CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054 * DIRTY 335c4c6028171cfddfbaae1a9c313c52 * CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342 * REMOVE 335c4c6028171cfddfbaae1a9c313c52 * DIRTY 1ab96a171faeeee38496d8b330771a7a * CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234 * READ 335c4c6028171cfddfbaae1a9c313c52 * READ 3400330d1dfc7f3f7f4b8d4d803dfcf6 * * The first five lines of the journal form its header. They are the * constant string "libcore.io.DiskLruCache", the disk cache's version, * the application's version, the value count, and a blank line. * * Each of the subsequent lines in the file is a record of the state of a * cache entry. Each line contains space-separated values: a state, a key, * and optional state-specific values. * o DIRTY lines track that an entry is actively being created or updated. * Every successful DIRTY action should be followed by a CLEAN or REMOVE * action. DIRTY lines without a matching CLEAN or REMOVE indicate that * temporary files may need to be deleted. * o CLEAN lines track a cache entry that has been successfully published * and may be read. A publish line is followed by the lengths of each of * its values. * o READ lines track accesses for LRU. * o REMOVE lines track entries that have been deleted. * * The journal file is appended to as cache operations occur. The journal may * occasionally be compacted by dropping redundant lines. A temporary file named * "journal.tmp" will be used during compaction; that file should be deleted if * it exists when the cache is opened. */ private final File directory; private final File journalFile; private final File journalFileTmp; private final int appVersion; private final long maxSize; private final int valueCount; private long size = 0; private Writer journalWriter; private final LinkedHashMap<String, Entry> lruEntries = new LinkedHashMap<String, Entry>(0, 0.75f, true); private int redundantOpCount; /** * To differentiate between old and current snapshots, each entry is given * a sequence number each time an edit is committed. 
A snapshot is stale if * its sequence number is not equal to its entry's sequence number. */ private long nextSequenceNumber = 0; /* From java.util.Arrays */ @SuppressWarnings("unchecked") private static <T> T[] copyOfRange(T[] original, int start, int end) { final int originalLength = original.length; // For exception priority compatibility. if (start > end) { throw new IllegalArgumentException(); } if (start < 0 || start > originalLength) { throw new ArrayIndexOutOfBoundsException(); } final int resultLength = end - start; final int copyLength = Math.min(resultLength, originalLength - start); final T[] result = (T[]) Array .newInstance(original.getClass().getComponentType(), resultLength); System.arraycopy(original, start, result, 0, copyLength); return result; } /** * Returns the remainder of 'reader' as a string, closing it when done. */ public static String readFully(Reader reader) throws IOException { try { StringWriter writer = new StringWriter(); char[] buffer = new char[1024]; int count; while ((count = reader.read(buffer)) != -1) { writer.write(buffer, 0, count); } return writer.toString(); } finally { reader.close(); } } /** * Returns the ASCII characters up to but not including the next "\r\n", or * "\n". * * @throws java.io.EOFException if the stream is exhausted before the next newline * character. */ public static String readAsciiLine(InputStream in) throws IOException { // TODO: support UTF-8 here instead StringBuilder result = new StringBuilder(80); while (true) { int c = in.read(); if (c == -1) { throw new EOFException(); } else if (c == '\n') { break; } result.append((char) c); } int length = result.length(); if (length > 0 && result.charAt(length - 1) == '\r') { result.setLength(length - 1); } return result.toString(); } /** * Closes 'closeable', ignoring any checked exceptions. Does nothing if 'closeable' is null. */ public static void closeQuietly(Closeable closeable) { if (closeable != null) { try { closeable.close(); } catch (RuntimeException rethrown) { throw rethrown; } catch (Exception ignored) { } } } /** * Recursively delete everything in {@code dir}. */ // TODO: this should specify paths as Strings rather than as Files public static void deleteContents(File dir) throws IOException { File[] files = dir.listFiles(); if (files == null) { throw new IllegalArgumentException("not a directory: " + dir); } for (File file : files) { if (file.isDirectory()) { deleteContents(file); } if (!file.delete()) { throw new IOException("failed to delete file: " + file); } } } /** This cache uses a single background thread to evict entries. */ private final ExecutorService executorService = new ThreadPoolExecutor(0, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>()); private final Callable<Void> cleanupCallable = new Callable<Void>() { @Override public Void call() throws Exception { synchronized (DiskLruCache.this) { if (journalWriter == null) { return null; // closed } trimToSize(); if (journalRebuildRequired()) { rebuildJournal(); redundantOpCount = 0; } } return null; } }; private DiskLruCache(File directory, int appVersion, int valueCount, long maxSize) { this.directory = directory; this.appVersion = appVersion; this.journalFile = new File(directory, JOURNAL_FILE); this.journalFileTmp = new File(directory, JOURNAL_FILE_TMP); this.valueCount = valueCount; this.maxSize = maxSize; } /** * Opens the cache in {@code directory}, creating a cache if none exists * there. 
* * @param directory a writable directory * @param appVersion * @param valueCount the number of values per cache entry. Must be positive. * @param maxSize the maximum number of bytes this cache should use to store * @throws java.io.IOException if reading or writing the cache directory fails */ public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize) throws IOException { if (maxSize <= 0) { throw new IllegalArgumentException("maxSize <= 0"); } if (valueCount <= 0) { throw new IllegalArgumentException("valueCount <= 0"); } // prefer to pick up where we left off DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize); if (cache.journalFile.exists()) { try { cache.readJournal(); cache.processJournal(); cache.journalWriter = new BufferedWriter(new FileWriter(cache.journalFile, true), IO_BUFFER_SIZE); return cache; } catch (IOException journalIsCorrupt) { // System.logW("DiskLruCache " + directory + " is corrupt: " // + journalIsCorrupt.getMessage() + ", removing"); cache.delete(); } } // create a new empty cache directory.mkdirs(); cache = new DiskLruCache(directory, appVersion, valueCount, maxSize); cache.rebuildJournal(); return cache; } private void readJournal() throws IOException { InputStream in = new BufferedInputStream(new FileInputStream(journalFile), IO_BUFFER_SIZE); try { String magic = readAsciiLine(in); String version = readAsciiLine(in); String appVersionString = readAsciiLine(in); String valueCountString = readAsciiLine(in); String blank = readAsciiLine(in); if (!MAGIC.equals(magic) || !VERSION_1.equals(version) || !Integer.toString(appVersion).equals(appVersionString) || !Integer.toString(valueCount).equals(valueCountString) || !"".equals(blank)) { throw new IOException("unexpected journal header: [" + magic + ", " + version + ", " + valueCountString + ", " + blank + "]"); } while (true) { try { readJournalLine(readAsciiLine(in)); } catch (EOFException endOfJournal) { break; } } } finally { closeQuietly(in); } } private void readJournalLine(String line) throws IOException { String[] parts = line.split(" "); if (parts.length < 2) { throw new IOException("unexpected journal line: " + line); } String key = parts[1]; if (parts[0].equals(REMOVE) && parts.length == 2) { lruEntries.remove(key); return; } Entry entry = lruEntries.get(key); if (entry == null) { entry = new Entry(key); lruEntries.put(key, entry); } if (parts[0].equals(CLEAN) && parts.length == 2 + valueCount) { entry.readable = true; entry.currentEditor = null; entry.setLengths(copyOfRange(parts, 2, parts.length)); } else if (parts[0].equals(DIRTY) && parts.length == 2) { entry.currentEditor = new Editor(entry); } else if (parts[0].equals(READ) && parts.length == 2) { // this work was already done by calling lruEntries.get() } else { throw new IOException("unexpected journal line: " + line); } } /** * Computes the initial size and collects garbage as a part of opening the * cache. Dirty entries are assumed to be inconsistent and will be deleted. */ private void processJournal() throws IOException { deleteIfExists(journalFileTmp); for (Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext(); ) { Entry entry = i.next(); if (entry.currentEditor == null) { for (int t = 0; t < valueCount; t++) { size += entry.lengths[t]; } } else { entry.currentEditor = null; for (int t = 0; t < valueCount; t++) { deleteIfExists(entry.getCleanFile(t)); deleteIfExists(entry.getDirtyFile(t)); } i.remove(); } } } /** * Creates a new journal that omits redundant information. 
This replaces the * current journal if it exists. */ private synchronized void rebuildJournal() throws IOException { if (journalWriter != null) { journalWriter.close(); } Writer writer = new BufferedWriter(new FileWriter(journalFileTmp), IO_BUFFER_SIZE); writer.write(MAGIC); writer.write("\n"); writer.write(VERSION_1); writer.write("\n"); writer.write(Integer.toString(appVersion)); writer.write("\n"); writer.write(Integer.toString(valueCount)); writer.write("\n"); writer.write("\n"); for (Entry entry : lruEntries.values()) { if (entry.currentEditor != null) { writer.write(DIRTY + ' ' + entry.key + '\n'); } else { writer.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n'); } } writer.close(); journalFileTmp.renameTo(journalFile); journalWriter = new BufferedWriter(new FileWriter(journalFile, true), IO_BUFFER_SIZE); } private static void deleteIfExists(File file) throws IOException { // try { // Libcore.os.remove(file.getPath()); // } catch (ErrnoException errnoException) { // if (errnoException.errno != OsConstants.ENOENT) { // throw errnoException.rethrowAsIOException(); // } // } if (file.exists() && !file.delete()) { throw new IOException(); } } /** * Returns a snapshot of the entry named {@code key}, or null if it doesn't * exist is not currently readable. If a value is returned, it is moved to * the head of the LRU queue. */ public synchronized Snapshot get(String key) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (entry == null) { return null; } if (!entry.readable) { return null; } /* * Open all streams eagerly to guarantee that we see a single published * snapshot. If we opened streams lazily then the streams could come * from different edits. */ InputStream[] ins = new InputStream[valueCount]; try { for (int i = 0; i < valueCount; i++) { ins[i] = new FileInputStream(entry.getCleanFile(i)); } } catch (FileNotFoundException e) { // a file must have been deleted manually! return null; } redundantOpCount++; journalWriter.append(READ + ' ' + key + '\n'); if (journalRebuildRequired()) { executorService.submit(cleanupCallable); } return new Snapshot(key, entry.sequenceNumber, ins); } /** * Returns an editor for the entry named {@code key}, or null if another * edit is in progress. */ public Editor edit(String key) throws IOException { return edit(key, ANY_SEQUENCE_NUMBER); } private synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER && (entry == null || entry.sequenceNumber != expectedSequenceNumber)) { return null; // snapshot is stale } if (entry == null) { entry = new Entry(key); lruEntries.put(key, entry); } else if (entry.currentEditor != null) { return null; // another edit is in progress } Editor editor = new Editor(entry); entry.currentEditor = editor; // flush the journal before creating files to prevent file leaks journalWriter.write(DIRTY + ' ' + key + '\n'); journalWriter.flush(); return editor; } /** * Returns the directory where this cache stores its data. */ public File getDirectory() { return directory; } /** * Returns the maximum number of bytes that this cache should use to store * its data. */ public long maxSize() { return maxSize; } /** * Returns the number of bytes currently being used to store the values in * this cache. This may be greater than the max size if a background * deletion is pending. 
*/ public synchronized long size() { return size; } private synchronized void completeEdit(Editor editor, boolean success) throws IOException { Entry entry = editor.entry; if (entry.currentEditor != editor) { throw new IllegalStateException(); } // if this edit is creating the entry for the first time, every index must have a value if (success && !entry.readable) { for (int i = 0; i < valueCount; i++) { if (!entry.getDirtyFile(i).exists()) { editor.abort(); throw new IllegalStateException("edit didn't create file " + i); } } } for (int i = 0; i < valueCount; i++) { File dirty = entry.getDirtyFile(i); if (success) { if (dirty.exists()) { File clean = entry.getCleanFile(i); dirty.renameTo(clean); long oldLength = entry.lengths[i]; long newLength = clean.length(); entry.lengths[i] = newLength; size = size - oldLength + newLength; } } else { deleteIfExists(dirty); } } redundantOpCount++; entry.currentEditor = null; if (entry.readable | success) { entry.readable = true; journalWriter.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n'); if (success) { entry.sequenceNumber = nextSequenceNumber++; } } else { lruEntries.remove(entry.key); journalWriter.write(REMOVE + ' ' + entry.key + '\n'); } if (size > maxSize || journalRebuildRequired()) { executorService.submit(cleanupCallable); } } /** * We only rebuild the journal when it will halve the size of the journal * and eliminate at least 2000 ops. */ private boolean journalRebuildRequired() { final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000; return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD && redundantOpCount >= lruEntries.size(); } /** * Drops the entry for {@code key} if it exists and can be removed. Entries * actively being edited cannot be removed. * * @return true if an entry was removed. */ public synchronized boolean remove(String key) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (entry == null || entry.currentEditor != null) { return false; } for (int i = 0; i < valueCount; i++) { File file = entry.getCleanFile(i); if (!file.delete()) { throw new IOException("failed to delete " + file); } size -= entry.lengths[i]; entry.lengths[i] = 0; } redundantOpCount++; journalWriter.append(REMOVE + ' ' + key + '\n'); lruEntries.remove(key); if (journalRebuildRequired()) { executorService.submit(cleanupCallable); } return true; } /** * Returns true if this cache has been closed. */ public boolean isClosed() { return journalWriter == null; } private void checkNotClosed() { if (journalWriter == null) { throw new IllegalStateException("cache is closed"); } } /** * Force buffered operations to the filesystem. */ public synchronized void flush() throws IOException { checkNotClosed(); trimToSize(); journalWriter.flush(); } /** * Closes this cache. Stored values will remain on the filesystem. */ public synchronized void close() throws IOException { if (journalWriter == null) { return; // already closed } for (Entry entry : new ArrayList<Entry>(lruEntries.values())) { if (entry.currentEditor != null) { entry.currentEditor.abort(); } } trimToSize(); journalWriter.close(); journalWriter = null; } private void trimToSize() throws IOException { while (size > maxSize) { // Map.Entry<String, Entry> toEvict = lruEntries.eldest(); final Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next(); remove(toEvict.getKey()); } } /** * Closes the cache and deletes all of its stored values. This will delete * all files in the cache directory including files that weren't created by * the cache. 
*/ public void delete() throws IOException { close(); deleteContents(directory); } private void validateKey(String key) { if (key.contains(" ") || key.contains("\n") || key.contains("\r")) { throw new IllegalArgumentException( "keys must not contain spaces or newlines: \"" + key + "\""); } } private static String inputStreamToString(InputStream in) throws IOException { return readFully(new InputStreamReader(in, UTF_8)); } /** * A snapshot of the values for an entry. */ public final class Snapshot implements Closeable { private final String key; private final long sequenceNumber; private final InputStream[] ins; private Snapshot(String key, long sequenceNumber, InputStream[] ins) { this.key = key; this.sequenceNumber = sequenceNumber; this.ins = ins; } /** * Returns an editor for this snapshot's entry, or null if either the * entry has changed since this snapshot was created or if another edit * is in progress. */ public Editor edit() throws IOException { return DiskLruCache.this.edit(key, sequenceNumber); } /** * Returns the unbuffered stream with the value for {@code index}. */ public InputStream getInputStream(int index) { return ins[index]; } /** * Returns the string value for {@code index}. */ public String getString(int index) throws IOException { return inputStreamToString(getInputStream(index)); } @Override public void close() { for (InputStream in : ins) { closeQuietly(in); } } } /** * Edits the values for an entry. */ public final class Editor { private final Entry entry; private boolean hasErrors; private Editor(Entry entry) { this.entry = entry; } /** * Returns an unbuffered input stream to read the last committed value, * or null if no value has been committed. */ public InputStream newInputStream(int index) throws IOException { synchronized (DiskLruCache.this) { if (entry.currentEditor != this) { throw new IllegalStateException(); } if (!entry.readable) { return null; } return new FileInputStream(entry.getCleanFile(index)); } } /** * Returns the last committed value as a string, or null if no value * has been committed. */ public String getString(int index) throws IOException { InputStream in = newInputStream(index); return in != null ? inputStreamToString(in) : null; } /** * Returns a new unbuffered output stream to write the value at * {@code index}. If the underlying output stream encounters errors * when writing to the filesystem, this edit will be aborted when * {@link #commit} is called. The returned output stream does not throw * IOExceptions. */ public OutputStream newOutputStream(int index) throws IOException { synchronized (DiskLruCache.this) { if (entry.currentEditor != this) { throw new IllegalStateException(); } return new FaultHidingOutputStream(new FileOutputStream(entry.getDirtyFile(index))); } } /** * Sets the value at {@code index} to {@code value}. */ public void set(int index, String value) throws IOException { Writer writer = null; try { writer = new OutputStreamWriter(newOutputStream(index), UTF_8); writer.write(value); } finally { closeQuietly(writer); } } /** * Commits this edit so it is visible to readers. This releases the * edit lock so another edit may be started on the same key. */ public void commit() throws IOException { if (hasErrors) { completeEdit(this, false); remove(entry.key); // the previous entry is stale } else { completeEdit(this, true); } } /** * Aborts this edit. This releases the edit lock so another edit may be * started on the same key. 
*/ public void abort() throws IOException { completeEdit(this, false); } private class FaultHidingOutputStream extends FilterOutputStream { private FaultHidingOutputStream(OutputStream out) { super(out); } @Override public void write(int oneByte) { try { out.write(oneByte); } catch (IOException e) { hasErrors = true; } } @Override public void write(byte[] buffer, int offset, int length) { try { out.write(buffer, offset, length); } catch (IOException e) { hasErrors = true; } } @Override public void close() { try { out.close(); } catch (IOException e) { hasErrors = true; } } @Override public void flush() { try { out.flush(); } catch (IOException e) { hasErrors = true; } } } } private final class Entry { private final String key; /** Lengths of this entry's files. */ private final long[] lengths; /** True if this entry has ever been published */ private boolean readable; /** The ongoing edit or null if this entry is not being edited. */ private Editor currentEditor; /** The sequence number of the most recently committed edit to this entry. */ private long sequenceNumber; private Entry(String key) { this.key = key; this.lengths = new long[valueCount]; } public String getLengths() throws IOException { StringBuilder result = new StringBuilder(); for (long size : lengths) { result.append(' ').append(size); } return result.toString(); } /** * Set lengths using decimal numbers like "10123". */ private void setLengths(String[] strings) throws IOException { if (strings.length != valueCount) { throw invalidLengths(strings); } try { for (int i = 0; i < strings.length; i++) { lengths[i] = Long.parseLong(strings[i]); } } catch (NumberFormatException e) { throw invalidLengths(strings); } } private IOException invalidLengths(String[] strings) throws IOException { throw new IOException("unexpected journal line: " + Arrays.toString(strings)); } public File getCleanFile(int i) { return new File(directory, key + "." + i); } public File getDirtyFile(int i) { return new File(directory, key + "." + i + ".tmp"); } } }
Source of the older BitmapFun version: http://files.cnblogs.com/xiaoQLu/bitmapfun_old.rar

/* * Copyright (C) 2012 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.example.android.bitmapfun.util; import android.content.Context; import android.graphics.Bitmap; import android.graphics.Bitmap.CompressFormat; import android.graphics.BitmapFactory; import android.os.Environment; import android.util.Log; import com.example.android.bitmapfun.BuildConfig; import java.io.BufferedOutputStream; import java.io.File; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.FilenameFilter; import java.io.IOException; import java.io.OutputStream; import java.io.UnsupportedEncodingException; import java.net.URLEncoder; import java.util.Collections; import java.util.LinkedHashMap; import java.util.Map; import java.util.Map.Entry; /** * A simple disk LRU bitmap cache to illustrate how a disk cache would be used for bitmap caching. A * much more robust and efficient disk LRU cache solution can be found in the ICS source code * (libcore/luni/src/main/java/libcore/io/DiskLruCache.java) and is preferable to this simple * implementation. */ public class DiskLruCache { private static final String TAG = "DiskLruCache"; private static final String CACHE_FILENAME_PREFIX = "cache_"; private static final int MAX_REMOVALS = 4; private static final int INITIAL_CAPACITY = 32; private static final float LOAD_FACTOR = 0.75f; private final File mCacheDir; private int cacheSize = 0; private int cacheByteSize = 0; private final int maxCacheItemSize = 64; // 64 item default private long maxCacheByteSize = 1024 * 1024 * 5; // 5MB default private CompressFormat mCompressFormat = CompressFormat.JPEG; private int mCompressQuality = 70; private final Map<String, String> mLinkedHashMap = Collections.synchronizedMap(new LinkedHashMap<String, String>( INITIAL_CAPACITY, LOAD_FACTOR, true)); /** * A filename filter to use to identify the cache filenames which have CACHE_FILENAME_PREFIX * prepended. */ private static final FilenameFilter cacheFileFilter = new FilenameFilter() { @Override public boolean accept(File dir, String filename) { return filename.startsWith(CACHE_FILENAME_PREFIX); } }; /** * Used to fetch an instance of DiskLruCache. * * @param context * @param cacheDir * @param maxByteSize * @return */ public static DiskLruCache openCache(Context context, File cacheDir, long maxByteSize) { if (!cacheDir.exists()) { cacheDir.mkdir(); } if (cacheDir.isDirectory() && cacheDir.canWrite() && Utils.getUsableSpace(cacheDir) > maxByteSize) { return new DiskLruCache(cacheDir, maxByteSize); } return null; } /** * Constructor that should not be called directly, instead use * {@link DiskLruCache#openCache(Context, File, long)} which runs some extra checks before * creating a DiskLruCache instance. * * @param cacheDir * @param maxByteSize */ private DiskLruCache(File cacheDir, long maxByteSize) { mCacheDir = cacheDir; maxCacheByteSize = maxByteSize; } /** * Add a bitmap to the disk cache. * * @param key A unique identifier for the bitmap. * @param data The bitmap to store. 
*/ public void put(String key, Bitmap data) { synchronized (mLinkedHashMap) { if (mLinkedHashMap.get(key) == null) { try { final String file = createFilePath(mCacheDir, key); if (writeBitmapToFile(data, file)) { put(key, file); flushCache(); } } catch (final FileNotFoundException e) { Log.e(TAG, "Error in put: " + e.getMessage()); } catch (final IOException e) { Log.e(TAG, "Error in put: " + e.getMessage()); } } } } private void put(String key, String file) { mLinkedHashMap.put(key, file); cacheSize = mLinkedHashMap.size(); cacheByteSize += new File(file).length(); } /** * Flush the cache, removing oldest entries if the total size is over the specified cache size. * Note that this isn't keeping track of stale files in the cache directory that aren't in the * HashMap. If the images and keys in the disk cache change often then they probably won't ever * be removed. */ private void flushCache() { Entry<String, String> eldestEntry; File eldestFile; long eldestFileSize; int count = 0; while (count < MAX_REMOVALS && (cacheSize > maxCacheItemSize || cacheByteSize > maxCacheByteSize)) { eldestEntry = mLinkedHashMap.entrySet().iterator().next(); eldestFile = new File(eldestEntry.getValue()); eldestFileSize = eldestFile.length(); mLinkedHashMap.remove(eldestEntry.getKey()); eldestFile.delete(); cacheSize = mLinkedHashMap.size(); cacheByteSize -= eldestFileSize; count++; if (BuildConfig.DEBUG) { Log.d(TAG, "flushCache - Removed cache file, " + eldestFile + ", " + eldestFileSize); } } } /** * Get an image from the disk cache. * * @param key The unique identifier for the bitmap * @return The bitmap or null if not found */ public Bitmap get(String key) { synchronized (mLinkedHashMap) { final String file = mLinkedHashMap.get(key); if (file != null) { if (BuildConfig.DEBUG) { Log.d(TAG, "Disk cache hit"); } return BitmapFactory.decodeFile(file); } else { final String existingFile = createFilePath(mCacheDir, key); if (new File(existingFile).exists()) { put(key, existingFile); if (BuildConfig.DEBUG) { Log.d(TAG, "Disk cache hit (existing file)"); } return BitmapFactory.decodeFile(existingFile); } } return null; } } /** * Checks if a specific key exist in the cache. * * @param key The unique identifier for the bitmap * @return true if found, false otherwise */ public boolean containsKey(String key) { // See if the key is in our HashMap if (mLinkedHashMap.containsKey(key)) { return true; } // Now check if there's an actual file that exists based on the key final String existingFile = createFilePath(mCacheDir, key); if (new File(existingFile).exists()) { // File found, add it to the HashMap for future use put(key, existingFile); return true; } return false; } /** * Removes all disk cache entries from this instance cache dir */ public void clearCache() { DiskLruCache.clearCache(mCacheDir); } /** * Removes all disk cache entries from the application cache directory in the uniqueName * sub-directory. * * @param context The context to use * @param uniqueName A unique cache directory name to append to the app cache directory */ public static void clearCache(Context context, String uniqueName) { File cacheDir = getDiskCacheDir(context, uniqueName); clearCache(cacheDir); } /** * Removes all disk cache entries from the given directory. This should not be called directly, * call {@link DiskLruCache#clearCache(Context, String)} or {@link DiskLruCache#clearCache()} * instead. 
* * @param cacheDir The directory to remove the cache files from */ private static void clearCache(File cacheDir) { final File[] files = cacheDir.listFiles(cacheFileFilter); for (int i=0; i<files.length; i++) { files[i].delete(); } } /** * Get a usable cache directory (external if available, internal otherwise). * * @param context The context to use * @param uniqueName A unique directory name to append to the cache dir * @return The cache dir */ public static File getDiskCacheDir(Context context, String uniqueName) { // Check if media is mounted or storage is built-in, if so, try and use external cache dir // otherwise use internal cache dir final String cachePath = Environment.getExternalStorageState() == Environment.MEDIA_MOUNTED || !Utils.isExternalStorageRemovable() ? Utils.getExternalCacheDir(context).getPath() : context.getCacheDir().getPath(); return new File(cachePath + File.separator + uniqueName); } /** * Creates a constant cache file path given a target cache directory and an image key. * * @param cacheDir * @param key * @return */ public static String createFilePath(File cacheDir, String key) { try { // Use URLEncoder to ensure we have a valid filename, a tad hacky but it will do for // this example return cacheDir.getAbsolutePath() + File.separator + CACHE_FILENAME_PREFIX + URLEncoder.encode(key.replace("*", ""), "UTF-8"); } catch (final UnsupportedEncodingException e) { Log.e(TAG, "createFilePath - " + e); } return null; } /** * Create a constant cache file path using the current cache directory and an image key. * * @param key * @return */ public String createFilePath(String key) { return createFilePath(mCacheDir, key); } /** * Sets the target compression format and quality for images written to the disk cache. * * @param compressFormat * @param quality */ public void setCompressParams(CompressFormat compressFormat, int quality) { mCompressFormat = compressFormat; mCompressQuality = quality; } /** * Writes a bitmap to a file. Call {@link DiskLruCache#setCompressParams(CompressFormat, int)} * first to set the target bitmap compression and format. * * @param bitmap * @param file * @return */ private boolean writeBitmapToFile(Bitmap bitmap, String file) throws IOException, FileNotFoundException { OutputStream out = null; try { out = new BufferedOutputStream(new FileOutputStream(file), Utils.IO_BUFFER_SIZE); return bitmap.compress(mCompressFormat, mCompressQuality, out); } finally { if (out != null) { out.close(); } } } }
It may seem strange that I have pasted three versions of the same class; the reason is that each differs from the others in important ways.
The latest version of the source can be found at many addresses, which does not matter here. What matters is that it references many other classes, so to use it you would have to import everything it depends on, and those classes depend on each other in turn, which is quite a hassle. For that reason I do not really recommend the first source. If you insist on using it, you can download the source attached at the end of this article, where I have already pulled in the classes that DiskLruCache needs.
The new BitmapFun version of the source is completely self-contained. It is essentially a trimmed-down edition of the first source: a single class that you can copy straight into your project. There is, however, a small caveat. The heart of this cache class is its journal: the key to the LRU algorithm is using the operation log to see which entries are accessed often and which are rarely used, so that eviction can be selective. The catch is precisely that journal. It lives on disk, so reading and writing it requires I/O, I/O is notoriously slow, and frequent I/O inevitably hurts performance. If you dig deeper you will find that reading images from disk (images are just the example here; any file type works) should happen on multiple threads, yet we must guarantee that only one thread touches the journal at a time, which forces a synchronization lock. Locked I/O is certainly safer, but throughput drops again.
The old BitmapFun version does not use a journal file at all; it simply tracks access times from the moment the app touches the SD-card cache, which avoids the frequent I/O and is a simple, workable approach. Since this code was extracted from BitmapFun, the objects it caches are bitmaps, so it is not very general-purpose; adapt it yourself if you want to use it.
To summarize: the first and second sources share exactly the same internal implementation; the second simply consolidates the code into one file so it is easy to drop into a project, so prefer the second source. If you find that image loading with this class is slow, you can swap in the third source to improve performance. For actual image loading I do not recommend BitmapFun itself; it is not very efficient.
1.2 Analysis
Source: http://blog.csdn.net/linghu_java/article/details/8600053
The annotated source:

/* * Copyright (C) 2011 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.example.android.bitmapfun.util; import java.io.BufferedInputStream; import java.io.BufferedWriter; import java.io.Closeable; import java.io.EOFException; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.FileWriter; import java.io.FilterOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.OutputStream; import java.io.OutputStreamWriter; import java.io.Reader; import java.io.StringWriter; import java.io.Writer; import java.lang.reflect.Array; import java.nio.charset.Charset; import java.util.ArrayList; import java.util.Arrays; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.Map; import java.util.concurrent.Callable; import java.util.concurrent.ExecutorService; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; /** ****************************************************************************** * Taken from the JB source code, can be found in: * libcore/luni/src/main/java/libcore/io/DiskLruCache.java * or direct link: * https://android.googlesource.com/platform/libcore/+/android-4.1.1_r1/luni/src/main/java/libcore/io/DiskLruCache.java ****************************************************************************** * * A cache that uses a bounded amount of space on a filesystem. Each cache * entry has a string key and a fixed number of values. Values are byte * sequences, accessible as streams or files. Each value must be between {@code * 0} and {@code Integer.MAX_VALUE} bytes in length. * 一個使用空間大小有邊界的文件cache,每一個entry包含一個key和values。values是byte序列,按文件或者流來訪問的。 * 每一個value的長度在0---Integer.MAX_VALUE之間。 * * <p>The cache stores its data in a directory on the filesystem. This * directory must be exclusive to the cache; the cache may delete or overwrite * files from its directory. It is an error for multiple processes to use the * same cache directory at the same time. * cache使用目錄文件存儲數據。文件路徑必須是唯一的,可以刪除和重寫目錄文件。多個進程同時使用同樣的文件目錄是不正確的 * * <p>This cache limits the number of bytes that it will store on the * filesystem. When the number of stored bytes exceeds the limit, the cache will * remove entries in the background until the limit is satisfied. The limit is * not strict: the cache may temporarily exceed it while waiting for files to be * deleted. The limit does not include filesystem overhead or the cache * journal so space-sensitive applications should set a conservative limit. * cache限制了大小,當超出空間大小時,cache就會后台刪除entry直到空間沒有達到上限為止。空間大小限制不是嚴格的, * cache可能會暫時超過limit在等待文件刪除的過程中。cache的limit不包括文件系統的頭部和日志, * 所以空間大小敏感的應用應當設置一個保守的limit大小 * * <p>Clients call {@link #edit} to create or update the values of an entry. An * entry may have only one editor at one time; if a value is not available to be * edited then {@link #edit} will return null. 
* <ul> * <li>When an entry is being <strong>created</strong> it is necessary to * supply a full set of values; the empty value should be used as a * placeholder if necessary. * <li>When an entry is being <strong>edited</strong>, it is not necessary * to supply data for every value; values default to their previous * value. * </ul> * Every {@link #edit} call must be matched by a call to {@link Editor#commit} * or {@link Editor#abort}. Committing is atomic: a read observes the full set * of values as they were before or after the commit, but never a mix of values. *調用edit()來創建或者更新entry的值,一個entry同時只能有一個editor;如果值不可被編輯就返回null。 *當entry被創建時必須提供一個value。空的value應當用占位符表示。當entry被編輯的時候,必須提供value。 *每次調用必須有匹配Editor commit或abort,commit是原子操作,讀必須在commit前或者后,不會造成值混亂。 * * <p>Clients call {@link #get} to read a snapshot of an entry. The read will * observe the value at the time that {@link #get} was called. Updates and * removals after the call do not impact ongoing reads. * 調用get來讀entry的快照。當get調用時讀者讀其值,更新或者刪除不會影響先前的讀 * * <p>This class is tolerant of some I/O errors. If files are missing from the * filesystem, the corresponding entries will be dropped from the cache. If * an error occurs while writing a cache value, the edit will fail silently. * Callers should handle other problems by catching {@code IOException} and * responding appropriately. * 該類可以容忍一些I/O errors。如果文件丟失啦,相應的entry就會被drop。寫cache時如果error發生,edit將失敗。 * 調用者應當相應的處理其它問題 */ public final class DiskLruCache implements Closeable { static final String JOURNAL_FILE = "journal"; static final String JOURNAL_FILE_TMP = "journal.tmp"; static final String MAGIC = "libcore.io.DiskLruCache"; static final String VERSION_1 = "1"; static final long ANY_SEQUENCE_NUMBER = -1; private static final String CLEAN = "CLEAN"; private static final String DIRTY = "DIRTY"; private static final String REMOVE = "REMOVE"; private static final String READ = "READ"; private static final Charset UTF_8 = Charset.forName("UTF-8"); private static final int IO_BUFFER_SIZE = 8 * 1024;//8K /* * This cache uses a journal file named "journal". A typical journal file * looks like this: * libcore.io.DiskLruCache * 1 //the disk cache's version * 100 //the application's version * 2 //value count * * //state key optional * CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054 * DIRTY 335c4c6028171cfddfbaae1a9c313c52 * CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342 * REMOVE 335c4c6028171cfddfbaae1a9c313c52 * DIRTY 1ab96a171faeeee38496d8b330771a7a * CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234 * READ 335c4c6028171cfddfbaae1a9c313c52 * READ 3400330d1dfc7f3f7f4b8d4d803dfcf6 * * The first five lines of the journal form its header. They are the * constant string "libcore.io.DiskLruCache", the disk cache's version, * the application's version, the value count, and a blank line. * * Each of the subsequent lines in the file is a record of the state of a * cache entry. Each line contains space-separated values: a state, a key, * and optional state-specific values. * o DIRTY lines track that an entry is actively being created or updated. * Every successful DIRTY action should be followed by a CLEAN or REMOVE * action. DIRTY lines without a matching CLEAN or REMOVE indicate that * temporary files may need to be deleted. * Dirty是entry被創建或者更新,每一個dirty應當被clean或remove action,如果有一行dirty沒有 * 匹配的clean或Remove action,就表示臨時文件需要被刪除。 * o CLEAN lines track a cache entry that has been successfully published * and may be read. A publish line is followed by the lengths of each of * its values. 
* Clean entry已經成功的發布並且可能會被讀過。一個發布行 * o READ lines track accesses for LRU. * Read表示LRU訪問 * o REMOVE lines track entries that have been deleted. * Remove表示entry已經被刪除 * * The journal file is appended to as cache operations occur. The journal may * occasionally be compacted by dropping redundant lines. A temporary file named * "journal.tmp" will be used during compaction; that file should be deleted if * it exists when the cache is opened. * 日志文件在cache操作發生時添加,日志可能O爾刪除的冗余行來壓縮。一個臨時的名字為journal.tmp的文件將被使用 * 在壓縮期間。當cache被opened的時候文件應當被刪除。 */ private final File directory; private final File journalFile;//日志文件 private final File journalFileTmp;//日志文件臨時文件 private final int appVersion;//應用ersion private final long maxSize;//最大空間 private final int valueCount;//key對應的value的個數 private long size = 0; private Writer journalWriter; private final LinkedHashMap<String, Entry> lruEntries = new LinkedHashMap<String, Entry>(0, 0.75f, true); private int redundantOpCount; /** * To differentiate between old and current snapshots, each entry is given * a sequence number each time an edit is committed. A snapshot is stale if * its sequence number is not equal to its entry's sequence number. * 區分老的和當前的快照,每一個實體在每次編輯被committed時都被賦予一個序列號。 * 一個快照的序列號如果不等於entry的序列號那它就是廢棄的。 */ private long nextSequenceNumber = 0; //數組拷貝 /* From java.util.Arrays */ @SuppressWarnings("unchecked") private static <T> T[] copyOfRange(T[] original, int start, int end) { final int originalLength = original.length; // For exception priority compatibility. if (start > end) { throw new IllegalArgumentException(); } if (start < 0 || start > originalLength) { throw new ArrayIndexOutOfBoundsException(); } final int resultLength = end - start; final int copyLength = Math.min(resultLength, originalLength - start); final T[] result = (T[]) Array .newInstance(original.getClass().getComponentType(), resultLength); System.arraycopy(original, start, result, 0, copyLength); return result; } /** * Returns the remainder of 'reader' as a string, closing it when done. * 返回String的值,然后close */ public static String readFully(Reader reader) throws IOException { try { StringWriter writer = new StringWriter(); char[] buffer = new char[1024]; int count; while ((count = reader.read(buffer)) != -1) { writer.write(buffer, 0, count); } return writer.toString(); } finally { reader.close(); } } /** * Returns the ASCII characters up to but not including the next "\r\n", or * "\n". * * @throws java.io.EOFException if the stream is exhausted before the next newline * character. * 讀取輸入流中返回的某行ASCII碼字符 */ public static String readAsciiLine(InputStream in) throws IOException { // TODO: support UTF-8 here instead StringBuilder result = new StringBuilder(80); while (true) { int c = in.read(); if (c == -1) { throw new EOFException(); } else if (c == '\n') { break; } result.append((char) c); } int length = result.length(); if (length > 0 && result.charAt(length - 1) == '\r') { result.setLength(length - 1); } return result.toString(); } /** * Closes 'closeable', ignoring any checked exceptions. Does nothing if 'closeable' is null. * closeable關閉 */ public static void closeQuietly(Closeable closeable) { if (closeable != null) { try { closeable.close(); } catch (RuntimeException rethrown) { throw rethrown; } catch (Exception ignored) { } } } /** * Recursively delete everything in {@code dir}. 
* 遞歸刪除dir */ // TODO: this should specify paths as Strings rather than as Files public static void deleteContents(File dir) throws IOException { File[] files = dir.listFiles(); if (files == null) { throw new IllegalArgumentException("not a directory: " + dir); } for (File file : files) { if (file.isDirectory()) { deleteContents(file); } if (!file.delete()) { throw new IOException("failed to delete file: " + file); } } } /** This cache uses a single background thread to evict entries. * 后台單線程回收entry */ private final ExecutorService executorService = new ThreadPoolExecutor(0, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>()); private final Callable<Void> cleanupCallable = new Callable<Void>() { @Override public Void call() throws Exception { synchronized (DiskLruCache.this) { if (journalWriter == null) { return null; // closed } trimToSize();//回收到滿足maxsize if (journalRebuildRequired()) { rebuildJournal(); redundantOpCount = 0; } } return null; } }; //構造器 private DiskLruCache(File directory, int appVersion, int valueCount, long maxSize) { this.directory = directory; this.appVersion = appVersion; this.journalFile = new File(directory, JOURNAL_FILE); this.journalFileTmp = new File(directory, JOURNAL_FILE_TMP); this.valueCount = valueCount; this.maxSize = maxSize; } /** * Opens the cache in {@code directory}, creating a cache if none exists * there. * 創建cache * * @param directory a writable directory * @param appVersion * @param valueCount the number of values per cache entry. Must be positive. * 每一個key相對應的value的數目 * @param maxSize the maximum number of bytes this cache should use to store * @throws IOException if reading or writing the cache directory fails */ public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize) throws IOException { if (maxSize <= 0) {//maxsize必須大於0 throw new IllegalArgumentException("maxSize <= 0"); } if (valueCount <= 0) {//valuecount也必須大於0 throw new IllegalArgumentException("valueCount <= 0"); } // prefer to pick up where we left off優先處理先前的cache DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize); if (cache.journalFile.exists()) { try { cache.readJournal(); cache.processJournal(); cache.journalWriter = new BufferedWriter(new FileWriter(cache.journalFile, true), IO_BUFFER_SIZE); return cache; } catch (IOException journalIsCorrupt) { // System.logW("DiskLruCache " + directory + " is corrupt: " // + journalIsCorrupt.getMessage() + ", removing"); cache.delete(); } } // create a new empty cache創建一個空新的cache directory.mkdirs(); cache = new DiskLruCache(directory, appVersion, valueCount, maxSize); cache.rebuildJournal(); return cache; } //讀取日志信息 private void readJournal() throws IOException { InputStream in = new BufferedInputStream(new FileInputStream(journalFile), IO_BUFFER_SIZE); try { String magic = readAsciiLine(in); String version = readAsciiLine(in); String appVersionString = readAsciiLine(in); String valueCountString = readAsciiLine(in); String blank = readAsciiLine(in); if (!MAGIC.equals(magic) || !VERSION_1.equals(version) || !Integer.toString(appVersion).equals(appVersionString) || !Integer.toString(valueCount).equals(valueCountString) || !"".equals(blank)) { throw new IOException("unexpected journal header: [" + magic + ", " + version + ", " + valueCountString + ", " + blank + "]"); } while (true) { try { readJournalLine(readAsciiLine(in));//讀取日志信息 } catch (EOFException endOfJournal) { break; } } } finally { closeQuietly(in);//關閉輸入流 } } //讀取日志中某行日志信息 private void readJournalLine(String 
line) throws IOException { String[] parts = line.split(" "); if (parts.length < 2) { throw new IOException("unexpected journal line: " + line); } String key = parts[1]; if (parts[0].equals(REMOVE) && parts.length == 2) { lruEntries.remove(key); return; } Entry entry = lruEntries.get(key); if (entry == null) { entry = new Entry(key); lruEntries.put(key, entry); } if (parts[0].equals(CLEAN) && parts.length == 2 + valueCount) { entry.readable = true; entry.currentEditor = null; entry.setLengths(copyOfRange(parts, 2, parts.length)); } else if (parts[0].equals(DIRTY) && parts.length == 2) { entry.currentEditor = new Editor(entry); } else if (parts[0].equals(READ) && parts.length == 2) { // this work was already done by calling lruEntries.get() } else { throw new IOException("unexpected journal line: " + line); } } /** * Computes the initial size and collects garbage as a part of opening the * cache. Dirty entries are assumed to be inconsistent and will be deleted. * 處理日志 * 計算初始化cache的初始化大小和收集垃圾。Dirty entry假定不一致將會被刪掉。 */ private void processJournal() throws IOException { deleteIfExists(journalFileTmp);//刪除日志文件 for (Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext(); ) { Entry entry = i.next(); if (entry.currentEditor == null) { for (int t = 0; t < valueCount; t++) { size += entry.lengths[t]; } } else { entry.currentEditor = null; for (int t = 0; t < valueCount; t++) { deleteIfExists(entry.getCleanFile(t)); deleteIfExists(entry.getDirtyFile(t)); } i.remove(); } } } /** * Creates a new journal that omits redundant information. This replaces the * current journal if it exists. * 創建一個新的刪掉冗余信息的日志。替換當前的日志 */ private synchronized void rebuildJournal() throws IOException { if (journalWriter != null) { journalWriter.close(); } Writer writer = new BufferedWriter(new FileWriter(journalFileTmp), IO_BUFFER_SIZE); writer.write(MAGIC); writer.write("\n"); writer.write(VERSION_1); writer.write("\n"); writer.write(Integer.toString(appVersion)); writer.write("\n"); writer.write(Integer.toString(valueCount)); writer.write("\n"); writer.write("\n"); for (Entry entry : lruEntries.values()) { if (entry.currentEditor != null) { writer.write(DIRTY + ' ' + entry.key + '\n'); } else { writer.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n'); } } writer.close(); journalFileTmp.renameTo(journalFile); journalWriter = new BufferedWriter(new FileWriter(journalFile, true), IO_BUFFER_SIZE); } //文件若存在刪除 private static void deleteIfExists(File file) throws IOException { // try { // Libcore.os.remove(file.getPath()); // } catch (ErrnoException errnoException) { // if (errnoException.errno != OsConstants.ENOENT) { // throw errnoException.rethrowAsIOException(); // } // } if (file.exists() && !file.delete()) { throw new IOException(); } } /** * Returns a snapshot of the entry named {@code key}, or null if it doesn't * exist is not currently readable. If a value is returned, it is moved to * the head of the LRU queue. * 返回key對應的entry的snapshot,當key相應的entry不存在或者當前不可讀時返回null。 * 如果返回相應的值,它就會被移動到LRU隊列的頭部。 */ public synchronized Snapshot get(String key) throws IOException { checkNotClosed();//檢查cache是否已關閉 validateKey(key);//驗證key格式的正確性 Entry entry = lruEntries.get(key); if (entry == null) { return null; } if (!entry.readable) { return null; } /* * Open all streams eagerly to guarantee that we see a single published * snapshot. If we opened streams lazily then the streams could come * from different edits. 
*/ InputStream[] ins = new InputStream[valueCount]; try { for (int i = 0; i < valueCount; i++) { ins[i] = new FileInputStream(entry.getCleanFile(i)); } } catch (FileNotFoundException e) { // a file must have been deleted manually! return null; } redundantOpCount++; journalWriter.append(READ + ' ' + key + '\n'); if (journalRebuildRequired()) { executorService.submit(cleanupCallable); } return new Snapshot(key, entry.sequenceNumber, ins); } /** * Returns an editor for the entry named {@code key}, or null if another * edit is in progress. */ public Editor edit(String key) throws IOException { return edit(key, ANY_SEQUENCE_NUMBER); } //有key和序列號生成一個editor private synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException { checkNotClosed();//檢查cache關閉與否 validateKey(key);//驗證key格式正確性 Entry entry = lruEntries.get(key); if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER && (entry == null || entry.sequenceNumber != expectedSequenceNumber)) { return null; // snapshot is stale } if (entry == null) { entry = new Entry(key); lruEntries.put(key, entry); } else if (entry.currentEditor != null) { return null; // another edit is in progress } Editor editor = new Editor(entry); entry.currentEditor = editor; // flush the journal before creating files to prevent file leaks journalWriter.write(DIRTY + ' ' + key + '\n'); journalWriter.flush(); return editor; } /** * Returns the directory where this cache stores its data. */ public File getDirectory() { return directory; } /** * Returns the maximum number of bytes that this cache should use to store * its data. */ public long maxSize() { return maxSize; } /** * Returns the number of bytes currently being used to store the values in * this cache. This may be greater than the max size if a background * deletion is pending. */ public synchronized long size() { return size; } //完成Edit動作 private synchronized void completeEdit(Editor editor, boolean success) throws IOException { Entry entry = editor.entry; if (entry.currentEditor != editor) { throw new IllegalStateException(); } // if this edit is creating the entry for the first time, every index must have a value if (success && !entry.readable) { for (int i = 0; i < valueCount; i++) { if (!entry.getDirtyFile(i).exists()) { editor.abort(); throw new IllegalStateException("edit didn't create file " + i); } } } for (int i = 0; i < valueCount; i++) { File dirty = entry.getDirtyFile(i); if (success) { if (dirty.exists()) { File clean = entry.getCleanFile(i); dirty.renameTo(clean); long oldLength = entry.lengths[i]; long newLength = clean.length(); entry.lengths[i] = newLength; size = size - oldLength + newLength; } } else { deleteIfExists(dirty); } } redundantOpCount++; entry.currentEditor = null; if (entry.readable | success) { entry.readable = true; journalWriter.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n'); if (success) { entry.sequenceNumber = nextSequenceNumber++; } } else { lruEntries.remove(entry.key); journalWriter.write(REMOVE + ' ' + entry.key + '\n'); } if (size > maxSize || journalRebuildRequired()) { executorService.submit(cleanupCallable); } } /** * We only rebuild the journal when it will halve the size of the journal * and eliminate at least 2000 ops. * 當日志大小減半並且刪掉至少2000項時重新構造日志 */ private boolean journalRebuildRequired() { final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000; return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD && redundantOpCount >= lruEntries.size(); } /** * Drops the entry for {@code key} if it exists and can be removed. 
Entries * actively being edited cannot be removed. * 刪除key相應的entry,被編輯的Entry不能被remove * @return true if an entry was removed. */ public synchronized boolean remove(String key) throws IOException { checkNotClosed();//檢查cache是否已經關閉 validateKey(key);//驗證key格式的正確性 Entry entry = lruEntries.get(key); if (entry == null || entry.currentEditor != null) { return false; } for (int i = 0; i < valueCount; i++) { File file = entry.getCleanFile(i); if (!file.delete()) { throw new IOException("failed to delete " + file); } size -= entry.lengths[i]; entry.lengths[i] = 0; } redundantOpCount++; journalWriter.append(REMOVE + ' ' + key + '\n'); lruEntries.remove(key); if (journalRebuildRequired()) { executorService.submit(cleanupCallable); } return true; } /** * Returns true if this cache has been closed. * 判斷cache是否已經關閉 */ public boolean isClosed() { return journalWriter == null; } //檢查cache是否已經關閉 private void checkNotClosed() { if (journalWriter == null) { throw new IllegalStateException("cache is closed"); } } /** * Force buffered operations to the filesystem. */ public synchronized void flush() throws IOException { checkNotClosed();//檢查cache是否關閉 trimToSize();//滿足最大空間limit journalWriter.flush(); } /** * Closes this cache. Stored values will remain on the filesystem. * 關閉cache。 */ public synchronized void close() throws IOException { if (journalWriter == null) { return; // already closed } for (Entry entry : new ArrayList<Entry>(lruEntries.values())) { if (entry.currentEditor != null) { entry.currentEditor.abort(); } } trimToSize(); journalWriter.close(); journalWriter = null; } //回收刪除某些entry到空間大小滿足maxsize private void trimToSize() throws IOException { while (size > maxSize) { // Map.Entry<String, Entry> toEvict = lruEntries.eldest(); final Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next(); remove(toEvict.getKey()); } } /** * Closes the cache and deletes all of its stored values. This will delete * all files in the cache directory including files that weren't created by * the cache. * 關閉刪除cache */ public void delete() throws IOException { close(); deleteContents(directory); } //驗證key格式的正確性 private void validateKey(String key) { if (key.contains(" ") || key.contains("\n") || key.contains("\r")) { throw new IllegalArgumentException( "keys must not contain spaces or newlines: \"" + key + "\""); } } //字符串形式讀出輸入流的內容 private static String inputStreamToString(InputStream in) throws IOException { return readFully(new InputStreamReader(in, UTF_8)); } /** * A snapshot of the values for an entry. * entry的快照 */ public final class Snapshot implements Closeable { private final String key;//key private final long sequenceNumber;//序列號(同文件名稱) private final InputStream[] ins;//兩個修改的文件輸入流 private Snapshot(String key, long sequenceNumber, InputStream[] ins) { this.key = key; this.sequenceNumber = sequenceNumber; this.ins = ins; } /** * Returns an editor for this snapshot's entry, or null if either the * entry has changed since this snapshot was created or if another edit * is in progress. * 返回entry快照的editor,如果entry已經更新了或者另一個edit正在處理過程中返回null。 */ public Editor edit() throws IOException { return DiskLruCache.this.edit(key, sequenceNumber); } /** * Returns the unbuffered stream with the value for {@code index}. */ public InputStream getInputStream(int index) { return ins[index]; } /** * Returns the string value for {@code index}. 
*/ public String getString(int index) throws IOException { return inputStreamToString(getInputStream(index)); } @Override public void close() { for (InputStream in : ins) { closeQuietly(in); } } } /** * Edits the values for an entry. * entry編輯器 */ public final class Editor { private final Entry entry; private boolean hasErrors; private Editor(Entry entry) { this.entry = entry; } /** * Returns an unbuffered input stream to read the last committed value, * or null if no value has been committed. * 返回一個最后提交的entry的不緩存輸入流,如果沒有值被提交過返回null */ public InputStream newInputStream(int index) throws IOException { synchronized (DiskLruCache.this) { if (entry.currentEditor != this) { throw new IllegalStateException(); } if (!entry.readable) { return null; } return new FileInputStream(entry.getCleanFile(index)); } } /** * Returns the last committed value as a string, or null if no value * has been committed. * 返回最后提交的entry的文件內容,字符串形式 */ public String getString(int index) throws IOException { InputStream in = newInputStream(index); return in != null ? inputStreamToString(in) : null; } /** * Returns a new unbuffered output stream to write the value at * {@code index}. If the underlying output stream encounters errors * when writing to the filesystem, this edit will be aborted when * {@link #commit} is called. The returned output stream does not throw * IOExceptions. * 返回一個新的無緩沖的輸出流,寫文件時如果潛在的輸出流存在錯誤,這個edit將被廢棄。 */ public OutputStream newOutputStream(int index) throws IOException { synchronized (DiskLruCache.this) { if (entry.currentEditor != this) { throw new IllegalStateException(); } return new FaultHidingOutputStream(new FileOutputStream(entry.getDirtyFile(index))); } } /** * Sets the value at {@code index} to {@code value}. * 設置entry的value的文件的內容 */ public void set(int index, String value) throws IOException { Writer writer = null; try { writer = new OutputStreamWriter(newOutputStream(index), UTF_8); writer.write(value); } finally { closeQuietly(writer); } } /** * Commits this edit so it is visible to readers. This releases the * edit lock so another edit may be started on the same key. * commit提交編輯的結果,釋放edit鎖然后其它edit可以啟動 */ public void commit() throws IOException { if (hasErrors) { completeEdit(this, false); remove(entry.key); // the previous entry is stale } else { completeEdit(this, true); } } /** * Aborts this edit. This releases the edit lock so another edit may be * started on the same key. * 廢棄edit,釋放edit鎖然后其它edit可以啟動 */ public void abort() throws IOException { completeEdit(this, false); } //包裝的輸出流類 private class FaultHidingOutputStream extends FilterOutputStream { private FaultHidingOutputStream(OutputStream out) { super(out); } @Override public void write(int oneByte) { try { out.write(oneByte); } catch (IOException e) { hasErrors = true; } } @Override public void write(byte[] buffer, int offset, int length) { try { out.write(buffer, offset, length); } catch (IOException e) { hasErrors = true; } } @Override public void close() { try { out.close(); } catch (IOException e) { hasErrors = true; } } @Override public void flush() { try { out.flush(); } catch (IOException e) { hasErrors = true; } } } } /* * Entry 最終類 */ private final class Entry { private final String key; /** Lengths of this entry's files. */ private final long[] lengths;//每一個cache文件的長度 /** True if this entry has ever been published */ private boolean readable; /** The ongoing edit or null if this entry is not being edited. */ private Editor currentEditor; /** The sequence number of the most recently committed edit to this entry. 
*/ private long sequenceNumber; private Entry(String key) { this.key = key; this.lengths = new long[valueCount]; } public String getLengths() throws IOException { StringBuilder result = new StringBuilder(); for (long size : lengths) { result.append(' ').append(size); } return result.toString(); } /** * Set lengths using decimal numbers like "10123". * 設置每一個cache文件的長度(即lengths[i]的長度) */ private void setLengths(String[] strings) throws IOException { if (strings.length != valueCount) { throw invalidLengths(strings); } try { for (int i = 0; i < strings.length; i++) { lengths[i] = Long.parseLong(strings[i]); } } catch (NumberFormatException e) { throw invalidLengths(strings); } } private IOException invalidLengths(String[] strings) throws IOException { throw new IOException("unexpected journal line: " + Arrays.toString(strings)); } public File getCleanFile(int i) { return new File(directory, key + "." + i); } public File getDirtyFile(int i) { return new File(directory, key + "." + i + ".tmp"); } } }
Key takeaways:
- As a cache, it naturally needs a size limit, and that limit should be configurable by the user (the programmer).
- The cache also needs an index, implemented here as key-value pairs where each value is a byte sequence. That means we can store anything that can be turned into bytes: text and bitmaps both qualify. When storing an object, just give it a unique key (a minimal string-caching sketch follows this list).
- Data is read back by that key, and because this is a disk operation, what comes back is naturally an input stream; by processing that stream we can turn it into whatever object we want.
- The class records its operations in a journal; analysing the journal reveals which entries are used often, so the journal is critical and we must ensure that only one thread accesses it at a time.
- The class tolerates some I/O errors. I/O failures are perfectly normal, so when using the class we should handle the methods that declare exceptions appropriately.
- Even though we set a cache size, files are not necessarily cleaned up the moment the total slightly exceeds that limit; in other words, the actual cache may be somewhat larger than the size you configured. If you are not extremely tight on disk space, you can ignore this.
- Since operations are recorded in a journal file, that file should compact itself automatically, otherwise it would grow without bound. When compaction is needed, the cache should automatically drop redundant lines, such as records left behind by failed operations.
- Reading a file should not block writes to it, so what we read may be a snapshot of the file.
- Any I/O must deal with character encoding and with closing streams, so the class should provide both an encoding convention and stream-closing helpers.
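Before moving on, here is a minimal sketch (not from the original article) that exercises these points with plain text instead of a bitmap, using the Editor.set() and Snapshot.getString() helpers visible in the source above. The "config" key and the surrounding method are made up for illustration; `cache` is assumed to be an already-opened DiskLruCache (see section 2.3 for how to open one).

void cacheStringExample(DiskLruCache cache) throws IOException {
    String key = "config";                        // any key without spaces or newlines
    DiskLruCache.Editor editor = cache.edit(key);
    if (editor != null) {                         // null means another edit is in progress
        editor.set(0, "hello disk cache");        // write value 0 as a UTF-8 string
        editor.commit();                          // make it visible to readers
    }
    DiskLruCache.Snapshot snapshot = cache.get(key);
    if (snapshot != null) {
        String value = snapshot.getString(0);     // read value 0 back as a string
        snapshot.close();                         // close the snapshot's streams
    }
}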
二、Initializing DiskLruCache
Copy the second source above into your project, resolve the imports, and you are ready to go.
2.1 Setting the cache size
private DiskLruCache mDiskCache;
private static final int DISK_CACHE_SIZE = 1024 * 1024 * 10; // 10 MB disk cache
private static final String DISK_CACHE_SUBDIR = "bitmap";    // we are caching bitmaps here
The cache size is now defined, 10 MB in this example. Since what we want to cache here is bitmaps (images), the cache sub-directory is named accordingly. If you want to cache text files or the like instead, pick another name of your own, for example "string".
2.2 Setting the cache directory
/**
 * @description Get the disk cache directory.
 * When the SD card is mounted or is not removable, getExternalCacheDir() is used to obtain
 * the cache path; otherwise getCacheDir() is used.
 * getExternalCacheDir() -> /sdcard/Android/data/<application package>/cache
 * getCacheDir()         -> /data/data/<application package>/cache
 * @param context
 * @param uniqueName the cache folder name, freely choosable; we cache bitmaps here, so it is "bitmap"
 * @return
 */
private File getDiskCacheDir(Context context, String uniqueName) {
    String cachePath;
    if (Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())
            || !Environment.isExternalStorageRemovable()) {
        cachePath = context.getExternalCacheDir().getPath();
    } else {
        cachePath = context.getCacheDir().getPath();
    }
    return new File(cachePath + File.separator + uniqueName);
}
This method decides where the cache lives, and the choice matters. DiskLruCache does not restrict where the data is cached, you are free to choose, but applications usually pick /sdcard/Android/data/<application package>/cache.
This location has two advantages:
First, it is on the SD card, so no matter how much data you cache it never eats into the phone's internal storage, as long as the SD card has enough room.
Second, Android treats this path as the application's cache directory, so when the app is uninstalled the data here is removed along with it, and no leftover files linger on the phone after the app is gone.
2.3 Initialization
Before initializing we need a helper that returns the current app version code; when the app is upgraded (the version code changes), the cache directory is wiped.
/**
 * @description Get the application's version code.
 * Note: whenever the version code changes, all data stored under the cache path is cleared.
 * @param context
 * @return
 */
private int getAppVersion(Context context) {
    try {
        PackageInfo info = context.getPackageManager()
                .getPackageInfo(context.getPackageName(), 0);
        return info.versionCode;
    } catch (NameNotFoundException e) {
        e.printStackTrace();
    }
    return 1;
}
A DiskLruCache instance is obtained through the open() method:
/**
 * Opens the cache in {@code directory}, creating a cache if none exists
 * there.
 *
 * @param directory a writable directory
 * @param appVersion
 * @param valueCount the number of values per cache entry. Must be positive.
 * @param maxSize the maximum number of bytes this cache should use to store
 * @throws IOException if reading or writing the cache directory fails
 */
public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize)
        throws IOException
The first parameter is the cache directory, the second is the application version, the third specifies how many cache files a single key may map to, which is almost always 1 (a one-to-one key-value mapping), and the last is the cache size, 10 MB in our example. That gives us the initialization method below.
/**
 * @description Initialize the disk cache.
 */
private void initCache() {
    try {
        File cacheDir = getDiskCacheDir(this, DISK_CACHE_SUBDIR);
        if (!cacheDir.exists()) {
            cacheDir.mkdirs();
        }
        mDiskCache = DiskLruCache.open(cacheDir, getAppVersion(this), 1, DISK_CACHE_SIZE);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
We first check whether the cache root directory exists and create it if it does not, then produce the DiskLruCache instance through the static open(...) method. That alone did not quite satisfy me: why must the instance be created through open()? Why not the constructor? Let's look at the source.
The constructor
Someone may point out that there is a constructor. True, but it is marked private, so we cannot call it directly, and you can already guess that open() calls this constructor internally.
private DiskLruCache(File directory, int appVersion, int valueCount, long maxSize) {
    this.directory = directory;
    this.appVersion = appVersion;
    this.journalFile = new File(directory, JOURNAL_FILE);
    this.journalFileTmp = new File(directory, JOURNAL_FILE_TMP);
    this.valueCount = valueCount;
    this.maxSize = maxSize;
}
open(…)
/**
 * Opens the cache in {@code directory}, creating a cache if none exists
 * there.
 *
 * @param directory a writable directory
 * @param appVersion
 * @param valueCount the number of values per cache entry. Must be positive.
 * @param maxSize the maximum number of bytes this cache should use to store
 * @throws IOException if reading or writing the cache directory fails
 */
public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize)
        throws IOException {
    if (maxSize <= 0) {
        throw new IllegalArgumentException("maxSize <= 0");
    }
    if (valueCount <= 0) {
        throw new IllegalArgumentException("valueCount <= 0");
    }

    // prefer to pick up where we left off
    DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
    if (cache.journalFile.exists()) {
        try {
            cache.readJournal();
            cache.processJournal();
            cache.journalWriter = new BufferedWriter(new FileWriter(cache.journalFile, true));
            return cache;
        } catch (IOException journalIsCorrupt) {
            Log.w(TAG, "DiskLruCache " + directory + " is corrupt: "
                    + journalIsCorrupt.getMessage() + ", removing");
            cache.delete();
        }
    }

    // create a new empty cache
    directory.mkdirs();
    cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
    cache.rebuildJournal();
    return cache;
}
Now we can see the point. open() exists to run a series of checks and guarantee that the arguments handed to the constructor are valid, which avoids mistakes: it validates the parameters and inspects the version number and the cache directory. If your app already cached data on a previous run, open() picks up the existing cache files; if there was no cache before, it creates a new, empty one.
三、Writing to the disk cache
3.1 Downloading an image from the network
Since we want to cache network images, we obviously have to download one first, and the downloaded image arrives as a stream.
package com.kale.cache;

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownLoadPic {

    /**
     * @description Download the file at the given url and write the image into the output stream.
     * @param urlString    the url to download
     * @param outputStream the stream to write the downloaded bytes into
     * @return true if the download succeeded
     */
    public static boolean downloadUrlToStream(String urlString, OutputStream outputStream) {
        HttpURLConnection urlConnection = null;
        BufferedOutputStream out = null;
        BufferedInputStream in = null;
        try {
            final URL url = new URL(urlString);
            urlConnection = (HttpURLConnection) url.openConnection();
            in = new BufferedInputStream(urlConnection.getInputStream(), 8 * 1024);
            out = new BufferedOutputStream(outputStream, 8 * 1024);
            int b;
            while ((b = in.read()) != -1) {
                out.write(b);
            }
            return true;
        } catch (final IOException e) {
            e.printStackTrace();
        } finally {
            if (urlConnection != null) {
                urlConnection.disconnect();
            }
            try {
                if (out != null) {
                    out.close();
                }
                if (in != null) {
                    in.close();
                }
            } catch (final IOException e) {
                e.printStackTrace();
            }
        }
        return false;
    }
}
We pass in a URL and let an HttpURLConnection handle the download; we also pass in an output stream and write the downloaded bytes into it.
3.2 Deriving the cache file's key
Now that we have a data stream, we need to think about how to store it on disk. As analysed above, stored data needs an index, and the index must be unique and map one-to-one to the data. Since we are storing images, the image's unique URL comes to mind. A URL such as https://avatars2.githubusercontent.com/u/9552155?v=3&u=0d33d696e757f6f529674aeb3f532f6e20f28a4b&s=140 qualifies as a key, but not as a file name. So we encode the URL to make it well-formed. Hashing with MD5 works fine; if your key does not need hashing at all, you can pick any other encoding whose output contains no characters that would need escaping. Here I use MD5.
package com.kale.cache;

import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class MD5 {

    /*
     * Hash a string with MD5.
     */
    public static String getMD5Str(String str) {
        MessageDigest messageDigest = null;
        try {
            messageDigest = MessageDigest.getInstance("MD5");
            messageDigest.reset();
            messageDigest.update(str.getBytes("UTF-8"));
        } catch (NoSuchAlgorithmException e) {
            System.out.println("NoSuchAlgorithmException caught!");
            System.exit(-1);
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }
        byte[] byteArray = messageDigest.digest();
        StringBuffer md5StrBuff = new StringBuffer();
        for (int i = 0; i < byteArray.length; i++) {
            if (Integer.toHexString(0xFF & byteArray[i]).length() == 1) {
                md5StrBuff.append("0").append(Integer.toHexString(0xFF & byteArray[i]));
            } else {
                md5StrBuff.append(Integer.toHexString(0xFF & byteArray[i]));
            }
        }
        // keep the middle 16 characters (indices 8 to 23) of the 32-character hex digest
        return md5StrBuff.substring(8, 24).toString().toUpperCase();
    }
}
We can now hash the URL with MD5, and the resulting string is guaranteed to be a legal file name. Keep in mind that the key we produce here becomes the file name of the future cache file.
String key = MD5.getMD5Str(url); // hash the URL with MD5 to generate the key
3.3 Writing the cache entry
Writing to the cache is done with the edit() method shown below.
/**
 * Returns an editor for the entry named {@code key}, or null if another
 * edit is in progress.
 */
public Editor edit(String key) throws IOException
Passing the key to edit() returns a DiskLruCache.Editor object. The editor exposes a newOutputStream() method that returns an output stream; the index we pass here is 0.
/**
 * Returns a new unbuffered output stream to write the value at
 * {@code index}. If the underlying output stream encounters errors when
 * writing to the filesystem, this edit will be aborted when
 * {@link #commit} is called. The returned output stream does not throw
 * IOExceptions.
 */
public OutputStream newOutputStream(int index) throws IOException
We can now connect the download to the stream produced by the editor: pass the editor's output stream into the download method, the downloaded bytes land in that stream, and they end up cached on disk.
Code fragment:
@Override
protected Boolean doInBackground(String... urls) {
    String key = MD5.getMD5Str(urls[0]); // hash the URL with MD5 to generate the key
    boolean result = false;
    try {
        DiskLruCache.Editor editor = mDiskLruCache.edit(key); // obtain an editor for this key
        if (editor != null) {
            OutputStream outputStream = editor.newOutputStream(0); // get an output stream
            if (DownLoadPic.downloadUrlToStream(urls[0], outputStream)) {
                // download the image from the url into the editor's output stream
                editor.commit(); // commit, writing the entry to disk
                result = true;
            } else {
                editor.abort();  // something went wrong, abort the edit
                result = false;
            }
        }
        // mDiskLruCache.flush();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return result;
}
The data is now cached on the SD card: the folder is named bitmap, each cache file is named after the MD5 of its key, and journal is the log file.
3.4 The complete code
package com.kale.cache;

import java.io.IOException;
import java.io.OutputStream;

import libcore.io.DiskLruCache;
import android.app.Activity;
import android.content.Context;
import android.os.AsyncTask;
import android.widget.Toast;

public class SetTask extends AsyncTask<String, Integer, Boolean> {

    private DiskLruCache mDiskLruCache;
    private Context context;

    public SetTask(Activity activity, DiskLruCache cache) {
        context = activity;
        mDiskLruCache = cache;
    }

    @Override
    protected Boolean doInBackground(String... urls) {
        String key = MD5.getMD5Str(urls[0]); // hash the URL with MD5 to generate the key
        boolean result = false;
        try {
            DiskLruCache.Editor editor = mDiskLruCache.edit(key); // obtain an editor for this key
            if (editor != null) {
                OutputStream outputStream = editor.newOutputStream(0); // get an output stream
                if (DownLoadPic.downloadUrlToStream(urls[0], outputStream)) {
                    // download the image from the url into the editor's output stream
                    editor.commit(); // commit, writing the entry to disk
                    result = true;
                } else {
                    editor.abort();  // something went wrong, abort the edit
                    result = false;
                }
            }
            // mDiskLruCache.flush();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return result;
    }

    @Override
    protected void onPostExecute(Boolean result) {
        super.onPostExecute(result);
        if (result) {
            Toast.makeText(context, "Written to disk", Toast.LENGTH_SHORT).show();
        } else {
            Toast.makeText(context, "Error!", Toast.LENGTH_SHORT).show();
        }
    }
}
四、Reading from the disk cache
The method for reading a cached entry is:
/**
 * Returns a snapshot of the entry named {@code key}, or null if it doesn't
 * exist or is not currently readable. If a value is returned, it is moved to
 * the head of the LRU queue.
 */
public synchronized Snapshot get(String key) throws IOException
The Javadoc says it all: use the key to obtain a Snapshot object. If it is null, there is currently no file for that key; if a result is returned, the entry is moved to the head of the LRU queue, marking it as freshly used so it will not be evicted.
We can then call DiskLruCache.Snapshot's getInputStream() method to obtain an input stream and read the cached data back in. Because we wrote with index 0 earlier, we read with index 0 here as well.
Code fragment:
DiskLruCache.Snapshot snapShot = mDiskLruCache.get(key);
if (snapShot != null) {
    InputStream is = snapShot.getInputStream(0);
    bitmap = BitmapFactory.decodeStream(is);
}
The complete code:
import java.io.IOException;
import java.io.InputStream;

import libcore.io.DiskLruCache;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.AsyncTask;
import android.widget.ImageView;

public class GetTask extends AsyncTask<String, Integer, Bitmap> {

    private DiskLruCache mDiskLruCache;
    private ImageView mImageView;

    public GetTask(ImageView imageView, DiskLruCache cache) {
        mImageView = imageView;
        mDiskLruCache = cache;
    }

    @Override
    protected Bitmap doInBackground(String... urls) {
        String key = MD5.getMD5Str(urls[0]); // hash the URL with MD5 to generate the key
        Bitmap bitmap = null;
        try {
            DiskLruCache.Snapshot snapShot = mDiskLruCache.get(key);
            if (snapShot != null) {
                InputStream is = snapShot.getInputStream(0);
                bitmap = BitmapFactory.decodeStream(is);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return bitmap;
    }

    @Override
    protected void onPostExecute(Bitmap result) {
        mImageView.setImageBitmap(result);
    }
}
When the "get pic" button is tapped, the image data is read from disk and finally displayed in the ImageView. A short sketch of how the two tasks might be triggered follows.
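The article itself does not show how the two tasks are kicked off. As a rough sketch (the button ids, the IMG_URL constant, and the MainActivity/mImageView names are assumptions, not part of the original code), they could be wired to two buttons inside onCreate() like this:

// Hypothetical wiring inside onCreate(); mDiskCache was opened in initCache() (section 2.3).
findViewById(R.id.set_pic_btn).setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        new SetTask(MainActivity.this, mDiskCache).execute(IMG_URL); // download and write to disk
    }
});
findViewById(R.id.get_pic_btn).setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        new GetTask(mImageView, mDiskCache).execute(IMG_URL);        // read from disk into the ImageView
    }
});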
五、Other APIs
5.1 size()
/**
 * Returns the number of bytes currently being used to store the values in
 * this cache. This may be greater than the max size if a background
 * deletion is pending.
 */
public synchronized long size() {
    return size;
}
This method returns the total number of bytes currently used by all cached data under the cache path. If your app needs to show the total cache size in its UI, this is the method to call. The value returned may be slightly larger than the configured cache size (when background deletions are pending). You can surface this number in the app's settings screen so users know how much cache the app occupies; a small formatting sketch is shown below.
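For example, the raw byte count could be formatted with Android's Formatter class before being shown in a settings screen (a sketch; the cacheSizeTextView and the Activity context are assumptions):

long bytes = mDiskCache.size(); // total bytes currently used by cached values
// formatFileSize() turns the raw byte count into something like "3.2 MB"
String readable = android.text.format.Formatter.formatFileSize(this, bytes);
cacheSizeTextView.setText(readable);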
5.2 flush()
This method synchronizes the in-memory record of operations to the journal file. It matters a great deal, because DiskLruCache can only work correctly if the journal's contents are up to date. We called it once while discussing the write path, but you do not actually need to call flush() on every write; calling it frequently brings no benefit and only adds extra time spent syncing the journal. The standard practice is simply to call flush() once in the Activity's onPause() method.
/*
 * flush() does not need to be called on every write; calling it frequently brings no benefit
 * and only adds extra time spent syncing the journal file. Calling it once in the Activity's
 * onPause() is the recommended place.
 */
@Override
protected void onPause() {
    super.onPause();
    try {
        mDiskCache.flush();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
5.3 close()
/**
 * Closes this cache. Stored values will remain on the filesystem.
 */
public synchronized void close() throws IOException {
    if (journalWriter == null) {
        return; // already closed
    }
    for (Entry entry : new ArrayList<Entry>(lruEntries.values())) {
        if (entry.currentEditor != null) {
            entry.currentEditor.abort();
        }
    }
    trimToSize();
    journalWriter.close();
    journalWriter = null;
}
This method closes the DiskLruCache and is the counterpart of open(). Once closed, none of the cache-manipulating methods may be called any more, so close() should normally only be called in the Activity's onDestroy() method, as sketched below.
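A sketch of that, mirroring the onPause()/flush() pattern shown above:

@Override
protected void onDestroy() {
    super.onDestroy();
    try {
        mDiskCache.close(); // after this, no cache-manipulating method may be called on this instance
    } catch (IOException e) {
        e.printStackTrace();
    }
}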
5.4 delete()
/**
 * Closes the cache and deletes all of its stored values. This will delete
 * all files in the cache directory including files that weren't created by
 * the cache.
 */
public void delete() throws IOException {
    close();
    IoUtils.deleteContents(directory);
}
This method deletes all cached data. A manual "clear cache" feature, for instance, only needs to call DiskLruCache's delete() method; a minimal sketch follows.
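A sketch of such a "clear cache" action (the background-thread wrapper is my own addition; since delete() also closes the cache, it is re-opened afterwards if caching should continue):

new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            mDiskCache.delete(); // closes the cache and wipes every file in its directory
            initCache();         // re-open an empty cache (initCache() is defined in section 2.3)
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}).start();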
六、The journal file
The journal is the log file that records every operation we perform on the cache. After a single write it holds only the header plus a few record lines; as we keep writing, more and more records are appended. An illustrative journal is shown at the end of section 6.2.
6.1 The file header
The first five lines of the journal form its header: the constant string libcore.io.DiskLruCache, the disk cache's version, the application's version, the value count, and a blank line.
6.2 The file body
The important operation records live in the file body, which uses several keywords.
DIRTY: marks a record as dirty.
Every time we call DiskLruCache's edit() method, a DIRTY line is written to the journal, meaning we are about to write a cache entry but do not yet know how it will end. Calling commit() afterwards means the write succeeded and a CLEAN line is appended, so the "dirty" record has been "washed clean"; calling abort() means the write failed and a REMOVE line is appended instead.
In other words, every DIRTY key should be followed by a matching CLEAN or REMOVE line for the same key; otherwise the entry counts as dirty and will be deleted automatically.
CLEAN: marks an entry that was stored successfully.
A CLEAN line is preceded by a corresponding DIRTY line and is written once the entry has been stored. It is followed by one number per value, such as the 152313 in the sample journal below, giving the size of the cached data in bytes.
REMOVE: written when caching fails.
Every REMOVE line is likewise preceded by a DIRTY line; it is produced when abort() is called to cancel the write.
READ: a record of a read.
Every time we call get() to read a cache entry, a READ line is written to the journal. An illustrative journal follows.
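For concreteness, here is what a journal for a cache opened with valueCount = 1 might look like. The layout (five header lines followed by one record per line) comes from the comments in the source above; the keys and the 152313 byte size are purely illustrative:

libcore.io.DiskLruCache
1
1
1

DIRTY 3400330d1dfc7f3f7f4b8d4d803dfcf6
CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 152313
READ 3400330d1dfc7f3f7f4b8d4d803dfcf6
DIRTY 335c4c6028171cfddfbaae1a9c313c52
REMOVE 335c4c6028171cfddfbaae1a9c313c52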
6.3 The size of the journal file
As mentioned earlier, the cache on disk may end up slightly larger than the maximum size you configured; check the source yourself for the exact reason. The journal file itself also keeps growing as operations are appended, and once it accumulates too many records DiskLruCache starts compacting it.
/**
 * We only rebuild the journal when it will halve the size of the journal
 * and eliminate at least 2000 ops.
 */
private boolean journalRebuildRequired() {
    final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000;
    return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD
            && redundantOpCount >= lruEntries.size();
}
DiskLruCache uses a redundantOpCount variable to count operations: every write, read, or removal increments it by one, and once it reaches 2000 (and also exceeds the number of entries) a journal rebuild is triggered. The rebuild strips out the redundant, unnecessary records and keeps the journal file at a reasonable size.
A quick-start helper class:
package com.kale.photoswall.util;

import java.io.File;
import java.io.IOException;

import libcore.io.DiskLruCache;
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager.NameNotFoundException;
import android.graphics.Bitmap;
import android.os.Environment;

import com.android.volley.toolbox.NetworkImageView;

public abstract class LruDiskBaseCache {

    protected DiskLruCache mDiskLruCache;

    public abstract void putDiskToView(NetworkImageView view, String url);

    public abstract void putBitmapToDisk(String url, Bitmap bitmap);

    /**
     * @description Initialize the disk cache.
     */
    protected LruDiskBaseCache(Context context, String diskCacheDir, int diskSize) {
        try {
            File cacheDir = getDiskCacheDir(context, diskCacheDir);
            if (!cacheDir.exists()) {
                cacheDir.mkdirs();
            }
            mDiskLruCache = DiskLruCache.open(cacheDir, getAppVersion(context), 1, diskSize);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * @description Get the application's version code.
     * Note: whenever the version code changes, all data stored under the cache path is cleared.
     * @param context
     * @return
     */
    protected int getAppVersion(Context context) {
        try {
            PackageInfo info = context.getPackageManager()
                    .getPackageInfo(context.getPackageName(), 0);
            return info.versionCode;
        } catch (NameNotFoundException e) {
            e.printStackTrace();
        }
        return 1;
    }

    /**
     * @description Get the disk cache directory.
     * When the SD card is mounted or is not removable, getExternalCacheDir() is used to obtain
     * the cache path; otherwise getCacheDir() is used.
     * getExternalCacheDir() -> /sdcard/Android/data/<application package>/cache
     * getCacheDir()         -> /data/data/<application package>/cache
     * @param context
     * @param uniqueName the cache folder name, freely choosable; we cache bitmaps here, so it is "bitmap"
     * @return
     */
    private File getDiskCacheDir(Context context, String uniqueName) {
        String cachePath;
        if (Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())
                || !Environment.isExternalStorageRemovable()) {
            cachePath = context.getExternalCacheDir().getPath();
        } else {
            cachePath = context.getCacheDir().getPath();
        }
        return new File(cachePath + File.separator + uniqueName);
    }
}
Source download: http://download.csdn.net/detail/shark0017/8400459
References:
http://www.eoeandroid.com/thread-277474-104-1.html
http://blog.csdn.net/sun1021976312/article/details/38611281
http://blog.csdn.net/linghu_java/article/details/8600053