Yeah, seems so:
Also, when you pay for the non-free version you get the option to host it on Windows Server, thus the EXTRA stability and recommended users... /sarcasm
I actually use it in production; it's just a more mature version of the OpenOffice/Collabora thing, because man, one look at their Docker container and you know how much of a mess they are in:
https://hub.docker.com/r/collabora/code/tags/
Clearly someone does not understand Docker and containers.
However, OnlyOffice has free and paid versions, and it seems the free version is limited to, or recommended for, up to 25 users; I don't know what the gimmick is there.
It does not support many languages, but when you have many people worshiping Excel who love working on it from a NAS, with all the pain of file locking, this comes as a pretty neat option, especially since it supports fast-mode and strict-mode editing (in strict mode, when you edit a cell it won't allow other users to edit it until the first user saves; then a notification goes to the second user to reload the page). It is a lot more polished than Collabora.
Also, inside the suite (word processor/spreadsheet) you can chat with other users, and importing of MS Office files is much smoother IMO.
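To make the strict-mode behavior concrete, here is a tiny sketch of that locking idea in Python. This is purely illustrative; the class and method names are my own and have nothing to do with OnlyOffice's actual implementation or API:

```python
class SheetLocks:
    """Toy model of strict-mode co-editing: one editor per cell at a time."""

    def __init__(self):
        self.locks = {}          # cell -> user currently editing it
        self.notifications = []  # users told to reload after a save

    def try_edit(self, cell, user):
        """Claim a cell for editing; fails if another user holds it."""
        owner = self.locks.setdefault(cell, user)
        return owner == user     # False if someone else holds the cell

    def save(self, cell, user, waiting_users):
        """First user saves: release the cell and notify waiting users."""
        if self.locks.get(cell) == user:
            del self.locks[cell]
            self.notifications.extend(waiting_users)


locks = SheetLocks()
assert locks.try_edit("B2", "user1") is True
assert locks.try_edit("B2", "user2") is False   # blocked until user1 saves
locks.save("B2", "user1", ["user2"])            # user2 gets a "reload" note
assert locks.try_edit("B2", "user2") is True
```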
Hi, actually I got it working without SSL, and I need to get it onto SSL so that other Nextcloud apps, like the KeePass integration, work...
However, I reckon here is the issue: when you start the OnlyOffice Docker container, run it using the default command from their Docker page, and when you go to Nextcloud, ensure that the OnlyOffice add-on URL looks like:
http://192.168.1.x
It won't work if Docker is running on HTTP and Nextcloud on HTTPS, or vice versa, but it's better to get it working on HTTPS.
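For reference, a sketch of that setup. The `docker run` line is the default command from OnlyOffice's Docker Hub page; the IP is a placeholder you would replace with your own host's address:

```shell
# Run OnlyOffice Document Server with its default command,
# publishing it on port 80 of the host (plain HTTP):
sudo docker run -i -t -d -p 80:80 --restart=always onlyoffice/documentserver

# Then, in the Nextcloud OnlyOffice add-on settings, point the
# Document Server address at the host over plain HTTP to match, e.g.:
#   http://192.168.1.x/
```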
@scottalanmiller said in Was It the Last IT Guys Fault:
"Why are there no backups?", "What do you mean that the RAID array has been degraded for two years?", "How do you not have any passwords to these systems?", "Why are there three email servers here but email is hosted with Google
I laughed so much because of those.
And yes, I can totally relate. I started working with a group of folks who brought me on after they had purchased the server with 3 disks, so RAID 5 it is. I wish I had been more involved in the process and had more time to order disks; however, since they were SAS disks, and harder to find where I live, we went live with 3 disks and RAID 5... and I don't like it one bit. I wish it were 4 disks and RAID 10.
And now that we have gone into production, I don't dare touch or change that array. When I leave, this will be on me for sure, but it's not my fault.
Okay, we will look past the fact that you kidnapped someone.
Try using a thick towel, or, from what I see in the movies, they use a separate hardware device for that.
Also, when I try to edit or change it to "As Large as Possible", it turns into 256 GB.
Please watch the video above. I'm playing with a fresh CentOS 7 Btrfs installation on four 1 TB disks (using VirtualBox).
My goal is to create a Btrfs RAID 10 across those 4 drives, but no matter what I do I get a volume with 1 TB worth of storage, which seems to me like what RAID 1 would give...
I should get 2 TB worth of real storage... right?
I understand Btrfs RAID is different from the typical RAID we are accustomed to, but is it that different???
I even tried with 5 disks now; same thing. I thought perhaps it was counting the system drive or boot disk, and that was the issue.
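As a sanity check on the expected numbers, here is a rough back-of-envelope calculator for usable capacity with equal-size disks. The function and profile names are my own (not any btrfs tool), and Btrfs allocates space in chunks, so real figures will differ a bit; note that Btrfs raid1 and raid10 both keep two copies of every block, so both should come out to 2 TB on four 1 TB disks:

```python
def usable_tb(n_disks, disk_tb, profile):
    """Rough usable capacity for equal-size disks (ignores chunk overhead)."""
    total = n_disks * disk_tb
    if profile == "raid0":
        return total                      # striping only, no redundancy
    if profile in ("raid1", "raid10"):
        return total / 2                  # two copies of every block
    if profile == "raid5":
        return (n_disks - 1) * disk_tb    # one disk's worth of parity
    if profile == "raid6":
        return (n_disks - 2) * disk_tb    # two disks' worth of parity
    raise ValueError(f"unknown profile: {profile}")


print(usable_tb(4, 1, "raid10"))  # -> 2.0, what four 1 TB disks should give
```

So a reported 1 TB on four 1 TB drives does not match any of these profiles, which suggests the tool is reporting something other than plain usable space.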
@scottalanmiller said in Hyper-converged infrastructure (I missed the storage part):
node mirror. The storage and the hypervisor and the management are all really one thing. Very
Yeah I am. I like the fact that they use KVM, and they explain it by saying standalone ESXi has truly become replaceable, and VMware knows that; that is why they are pushing into backup/replication/storage/networking, but not the core virtualization. Because, as I believe, we have reached the point where KVM/Xen can deliver just as good performance at running virtual machines.
But it's the other stuff that is important now.
And we can no longer say things like "oh well, I have run ESXi for 10 years and never had a problem", because the same applies to KVM; it's just that people are not using it as much.
I thought I saw DRBL, and I was like, YAY, I know that one, because I worked on the Clonezilla project a long time ago.
But alas, it is DRBD. No, I am not, but I am reading about it more now.
TIGHT
I love working in those places where you have to come up with big solutions for little cost; it really shows you how the other side of the world does things and cuts corners on cost.
Afterwards you can watch everything tumble to the ground ... kidding.
This is what I was looking for.
Because when I heard about what they are doing and how this is done = دخلت بل حيط
It is an Arabic saying, literally meaning that you ran into a brick wall.
I was no longer able to compute. I'm aware that I don't know everything, but this method of virtualizing storage in VMs and then sharing it with other VMs appeared very complex to me; yet I was very curious and wished to attempt it.
I reckon it is better to go back a step and focus on more realistic goals.
Okay I am understanding more and more about this.
But can somebody answer me the following:
Is it normal for someone to install a hypervisor and virtualize storage on the same server?
Why do I sometimes hear about users running Btrfs or ZFS inside VMs and then allocating that storage to other VMs? Is this normal?
Thanks, still watching it.
Hello,
I'm trying to wrap my head around this idea, which I know has existed for so long, but only recently have I come to understand it more.
As Wikipedia defines it :
Hyper-converged infrastructure (HCI, also called a hyper-converged integrated system, HCIS) refers to integrating virtualization of storage and computing in a data center.[1]
For me now, virtualizing computing is the easy part, because I reckon they mean VMs with CPU, RAM, etc.
I played a little with iSCSI and understood more about how storage can be remotely allocated, via an Asustor AS1002T NAS, which actually has this functionality; that was surprising for such an affordable NAS.
But I still don't know how the pros do it. Currently I use the storage built into the server to store the VMs (DAS). If I understood this correctly, hyper-converged infrastructure must mean that the storage resides on a different server, and the VMs are stored on that storage server? Is that what they call hyper-converged infrastructure, since it allows more flexibility?
And can I do this with a home-built NAS: a bunch of cheap disks with something like FreeNAS, OpenMediaVault, or Openfiler on top (what do you recommend?) acting as the storage server, then another server being the KVM server, with the KVM VMs located on the storage server? Would that qualify as hyper-converged infrastructure?
Someone hit me with some sense
The web UI's benefit is that it allowed me to understand why, every time I launched a container, it spawned a new one with a new generic name, and it helped me understand things more visually.
However, you can't do much more from it than stop/start/restart/commit containers and monitor them. When it comes to making edits, it's advised you do everything from the command line.
I had actually never heard of supervisord until you mentioned it, so I ran the default, which I reckon is each process in a separate container, and have had no issues yet.
For me the important thing I learned exploring docker is distinction between application containers and OS containers, and how to approach each.
Since you mentioned CentOS in Docker, you might want to check out Alpine Linux images, because they fit the "just enough OS" ideology of Docker (or was that VMware Photon?).
@msff-amman-Itofficer
Sorry, all working 100%
Thank you very much
I should sleep now; it's way too late, hence these small mistakes.
I simply didn't copy the
),
with the rest of the commands,
because I had already seen it in the file; but after a second look at your config, I realized it was in the file as
);
@scottalanmiller said in How to tune or optimize MySQL or MariaDB ?:
Integrated databases are often much faster. Especially at small sizes. If you are heavily constrained, I would expect SQLite to be faster since the RDBMS doesn't need to be loaded and no network overhead is needed.
I guess this is really it, then. To be honest, it is more like 3 seconds, but with the integrated database it was very fast, which is why I noticed it.
Using free -m, I see I am using 512 MB of 2 GB of RAM.
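The in-process point is easy to see with Python's built-in sqlite3 module: the database engine runs inside the application process, so a query is just a function call, with no server to start and no network round trip. A minimal sketch, not a benchmark:

```python
import sqlite3

# SQLite runs inside this process: no server to connect to, no network hop.
conn = sqlite3.connect(":memory:")       # throwaway in-memory database
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (title) VALUES (?)",
                 [("hello",), ("world",)])
count = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
print(count)  # -> 2
conn.close()
```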