In local search algorithms we no longer care about the path from the initial node to the goal node; instead, we start from the current node and move to neighboring states until we reach a reasonable goal state. Compared with the uninformed and informed search algorithms discussed earlier, local search algorithms can often find a reasonable solution in very large or even infinite state spaces using only constant space (since no path needs to be stored).
Hill Climbing
Hill climbing keeps moving in the direction of increasing value until it reaches a peak.
function HillClimbing(problem) returns a local maximum state
    current_state = initial_state
    loop do
        next_state = the highest-valued neighbor of current_state
        if next_state is higher than current_state then
            current_state = next_state
        else
            return current_state
The problem with hill climbing is that it only guarantees reaching a local maximum, not the global maximum.
For example, starting from point C, we would stop at the local maximum A and therefore never reach the global maximum B.
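As a concrete illustration, here is a minimal hill-climbing sketch in Python. The objective function f, the integer state space, and the step neighbor generator are illustrative assumptions, not part of the pseudocode above.

# Minimal hill-climbing sketch (assumed example, not the original pseudocode).
def hill_climbing(initial_state, value, neighbors):
    """Greedily move to the best neighbor until no neighbor is better."""
    current = initial_state
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current          # local maximum reached
        current = best

# Example: maximize f(x) = -(x - 3)^2 over integer states.
if __name__ == "__main__":
    f = lambda x: -(x - 3) ** 2
    step = lambda x: [x - 1, x + 1]
    print(hill_climbing(0, f, step))   # climbs to 3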
Simulated Annealing
Simulated annealing is similar to hill climbing, except that instead of always moving in the direction of increasing value, it allows moves toward lower values with a certain probability, which makes it possible to escape the local maximum A and reach the global maximum B.
It is called simulated annealing because this probability starts out relatively high and then, like temperature, gradually decreases over time.
function SimulatedAnnealing(problem, schedule) returns a solution state
    current_state = initial_state
    for t = 1 to infinity do
        T = schedule(t)
        if T = 0 then
            return current_state
        next_state = a randomly selected neighbor of current_state
        ΔE = next_state.height - current_state.height
        if ΔE > 0 then
            current_state = next_state
        else
            current_state = next_state with probability e^(ΔE/T)
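Below is a minimal Python sketch of simulated annealing following the pseudocode above. The geometric cooling schedule, the objective f, and the neighbor generator are illustrative assumptions.

import math
import random

# Minimal simulated-annealing sketch (assumed example objective and schedule).
def simulated_annealing(initial_state, value, neighbors, schedule):
    current = initial_state
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(neighbors(current))
        delta_e = value(nxt) - value(current)
        # Always accept an uphill move; accept a downhill move
        # with probability e^(delta_e / T), which shrinks as T cools.
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
        t += 1

# Example: maximize f(x) = -(x - 3)^2 with a geometric cooling schedule.
if __name__ == "__main__":
    f = lambda x: -(x - 3) ** 2
    step = lambda x: [x - 1, x + 1]
    cool = lambda t: 0 if t > 1000 else 10 * 0.95 ** t
    print(simulated_annealing(0, f, step, cool))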
Genetic Algorithm
A genetic algorithm simulates biological inheritance: starting from an initial population, it iteratively applies a series of crossovers and mutations until a suitable population is obtained, and then picks out the best individual.
function GeneticAlgorithm(population, fitness) returns a solution state
    inputs: population, a set of individuals
            fitness, a function that measures the fitness of an individual
    repeat
        new_population = empty set
        for i = 1 to sizeof(population) do
            x = RandomSelect(population, fitness)
            y = RandomSelect(population, fitness)
            new_individual = Reproduce(x, y)
            if (small random probability) then
                new_individual = Mutate(new_individual)
            add new_individual to new_population
        population = new_population
    until some individual is fit enough, or enough time has elapsed
    return the best individual in population, according to fitness
----------------------------------------------------------------
function Reproduce(x, y) returns a new individual
    inputs: x, y, the parents of the new individual
    length = Length(x)
    crossover_point = RandomSelectIn(1, length)
    new_individual = Sub(x, 1, crossover_point)
                   + Sub(y, crossover_point + 1, length)
    return new_individual
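Putting the pieces together, here is a minimal Python sketch of the genetic algorithm above. Representing individuals as bit strings, the count-of-ones fitness, the population size, and the mutation rate are all illustrative assumptions.

import random

# Minimal genetic-algorithm sketch (assumed bit-string individuals and fitness).
def genetic_algorithm(population, fitness, mutation_rate=0.1, generations=100):
    for _ in range(generations):
        new_population = []
        for _ in range(len(population)):
            # Fitness-weighted random selection of two parents.
            x, y = random.choices(population,
                                  weights=[fitness(p) for p in population], k=2)
            child = reproduce(x, y)
            if random.random() < mutation_rate:
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

def reproduce(x, y):
    # Single-point crossover: a prefix of x followed by the rest of y.
    c = random.randint(1, len(x) - 1)
    return x[:c] + y[c:]

def mutate(individual):
    # Flip one randomly chosen bit.
    i = random.randrange(len(individual))
    return individual[:i] + ('1' if individual[i] == '0' else '0') + individual[i + 1:]

# Example: evolve 8-bit strings toward all ones.
if __name__ == "__main__":
    ones = lambda s: s.count('1') + 1   # +1 keeps every selection weight positive
    pop = [''.join(random.choice('01') for _ in range(8)) for _ in range(20)]
    print(genetic_algorithm(pop, ones))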