My app needs to find the dominant palette of an image and the position in the image of the k most dominant colors. I followed the very useful sample project from the vImage documentation, and the algorithm works fine, but I can't wrap my head around how to link those dominant colors back to a point in the image. Since the algorithm works by filling the channel storages first, I also tried filling an array of CGPoints called locationStorage and working with that:
// Filling the array.
// Using 0 ..< width and 0 ..< height so the array ends up with exactly
// width * height entries. Note the outer loop is over x, so the points are
// appended column by column (every y for x = 0, then every y for x = 1, ...).
for i in 0 ..< width {
    for j in 0 ..< height {
        locationStorage.append(CGPoint(x: i, y: j))
    }
}
.
.
.
// Working with the array: pick a random pixel as one of the initial centroids.
let randomIndex = Int.random(in: 0 ..< width * height)
centroids.append(Centroid(red: redStorage[randomIndex],
                          green: greenStorage[randomIndex],
                          blue: blueStorage[randomIndex],
                          position: locationStorage[randomIndex]))
}
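For context, my understanding (an assumption on my part, not something I took from the sample project) is that the channel storages are laid out row by row, so a flat index could also be mapped straight back to a coordinate without a separate array:

import CoreGraphics

// Assumption: the channel storages are row-major, i.e. index == y * width + x.
// If that's right, a sample's position can be recovered from its index alone.
func position(ofIndex index: Int, width: Int) -> CGPoint {
    CGPoint(x: index % width, y: index / width)
}

In the snippet above that would mean using position(ofIndex: randomIndex, width: width) instead of locationStorage[randomIndex], though I'm not sure my layout assumption actually holds.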
struct Centroid {
    /// The red channel value.
    var red: Float
    /// The green channel value.
    var green: Float
    /// The blue channel value.
    var blue: Float
    /// The number of pixels assigned to this cluster center.
    var pixelCount: Int = 0
    /// The position in the image associated with this cluster center.
    var position: CGPoint = .zero

    init(red: Float, green: Float, blue: Float, position: CGPoint) {
        self.red = red
        self.green = green
        self.blue = blue
        self.position = position
    }
}
However, the positions I get this way aren't accurate.
I also tried brute-forcing it: checking every pixel in the image to find the one closest to each color (roughly like the sketch below), but I think it's too slow.
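To illustrate, this is roughly the brute-force version I mean (a simplified sketch; nearestPixelPosition is just an illustrative name, and it assumes the channel storages are plain Float arrays laid out row by row):

import CoreGraphics

// Simplified sketch of the brute-force idea: for one centroid, scan every
// pixel and return the position whose color is closest to it.
// Assumes redStorage / greenStorage / blueStorage are row-major Float arrays
// of length width * height.
func nearestPixelPosition(to centroid: Centroid,
                          redStorage: [Float],
                          greenStorage: [Float],
                          blueStorage: [Float],
                          width: Int,
                          height: Int) -> CGPoint {
    var bestIndex = 0
    var bestDistance = Float.greatestFiniteMagnitude

    for index in 0 ..< width * height {
        // Squared distance in RGB space; no sqrt needed just to compare.
        let dr = redStorage[index] - centroid.red
        let dg = greenStorage[index] - centroid.green
        let db = blueStorage[index] - centroid.blue
        let distance = dr * dr + dg * dg + db * db

        if distance < bestDistance {
            bestDistance = distance
            bestIndex = index
        }
    }

    // Convert the flat index back to (x, y), assuming row-major layout.
    return CGPoint(x: bestIndex % width, y: bestIndex / width)
}

Running something like that once per centroid over a full-size photo is what felt too slow.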
What do you think my approach should be? Let me know if you need additional info. Please be kind, I'm learning Swift.