Question about memory utilization
In Mission Control I'm seeing roughly 15% average memory utilization. Should I consider my nodes over-provisioned in terms of memory, or do these memory counters not account for memory used by OS file system caching?
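To sanity-check what the OS itself reports on a node, here is a minimal sketch (assuming Linux nodes, reading /proc/meminfo) that splits "used" memory into process memory versus reclaimable file system cache:

```python
#!/usr/bin/env python3
"""Minimal sketch (assumes Linux): compare memory 'used' with and without
the OS page cache by parsing /proc/meminfo on a node."""

def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are reported in kB
    return info

def memory_breakdown():
    m = read_meminfo()
    total = m["MemTotal"]
    free = m["MemFree"]
    cache = m.get("Cached", 0) + m.get("Buffers", 0)

    used_incl_cache = total - free              # what a naive "used" counter reports
    used_excl_cache = used_incl_cache - cache   # memory actually held by processes

    print(f"Total memory:           {total / 1024**2:.1f} GB")
    print(f"Used (incl. FS cache):  {used_incl_cache / total:6.1%}")
    print(f"Used (excl. FS cache):  {used_excl_cache / total:6.1%}")
    print(f"FS cache (reclaimable): {cache / total:6.1%}")

if __name__ == "__main__":
    memory_breakdown()
```

If the "used excl. cache" figure is close to the 15% I see in Mission Control, then the dashboard is presumably not counting the page cache.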
I'm running in AWS on r3.8xlarge (~250 GB memory), which is more expensive than c3.8xlarge (60 GB memory).
Mostly I'm doing ETL in this cluster with relatively few concurrent queries. Any insight into whether I should go for more nodes with 60 GB of memory versus fewer nodes with 250 GB of memory?