Is the file system accessible across VPCs?
You can add multiple VPCs to a file system to enable access across VPCs, but only within the same region. For example, a file system in Hong Kong 2 can only add VPCs in Hong Kong 2, not VPCs in other resource pools.
You can add up to 20 VPCs to a single file system. By adding the VPC of the cloud server that needs access to the file system, you enable access from that VPC. For details, see Adding VPC.
How many clients can mount a single file system?
There is currently no hard upper limit. However, we recommend mounting no more than 1,000 clients on a single file system, because too many clients may cause mount failures. You can spread the load by storing service data across multiple file systems.
Can file systems be mounted across regions?
No. File systems can only be mounted on cloud servers in the same VPC and the same region. For example, a file system in Hong Kong 2 can only be mounted on cloud servers in Hong Kong 2, not on cloud servers in other resource pools.
Does the file system support multi-AZ cross access?
Yes. VPCs in the same region do not distinguish between AZs. By adding the VPC where the cloud server resides to the file system (so that both belong to the same VPC), the file system can be mounted across AZs in the same region, realizing multi-AZ cross access.
For example, a file system created in AZ 1 can be mounted on a cloud server in AZ 2 that belongs to the same VPC in the same region, realizing file sharing and access across AZs. For details, see Mounting a File System Across AZs.
What should I do if a command gets stuck in the mount directory of a deleted file system?
To resolve this issue, follow the steps below:
1. Edit the /etc/rc.local or /etc/fstab file and comment out the file system's configuration (see the sketch after this list). This ensures that the file system is not automatically mounted when the server restarts.
2. Restart the server for the changes to take effect.
3. We recommend unmounting the file system in the operating system before deleting a file system instance. The specific unmounting steps depend on your operating system and file system type.
4. If you have enabled auto-mount, remove or change the auto-mount configuration so that the deleted file system is no longer mounted automatically.
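A minimal sketch of steps 1 and 2, assuming a hypothetical /etc/fstab entry for the deleted file system (the mount address and local path are placeholders, not the actual values of your file system):
# In /etc/fstab, prefix the stale entry with # to comment it out:
#sfs-example.internal:/ /mnt/sfs nfs vers=3,proto=tcp,nolock 0 0
# Then restart the server for the change to take effect:
reboot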
How do I create and mount scalable file subdirectories in a Linux virtual machine?
Prerequisites: You have successfully mounted the SFS to the ECS Linux virtual machine, for example at the mount path /mnt/dir. You can then create the scalable file subdirectory under /mnt/dir.
Solution:
1. Create a subdirectory of the file system in the Linux ECS: mkdir /mnt/dir/subdir
2. Create a local directory for mounting the file system: mkdir /tmp/mnt
3. Re-mount the file system:
mount -t nfs -o vers=3,proto=tcp,async,nolock,noatime,nodiratime,wsize=1048576,rsize=1048576,timeo=600,actimeo=0 <mount address>:/subdir /tmp/mnt
Here <mount address> is the file system's mount address, and /subdir is the path of the new subdirectory relative to the file system root (it corresponds to the local path /mnt/dir/subdir created in step 1).
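A worked sketch of the full sequence, assuming a hypothetical mount address sfs-example.internal (replace it with the address shown on your File System Details page):
# Create the subdirectory under the existing mount at /mnt/dir
mkdir /mnt/dir/subdir
# Create a local mount point for the subdirectory
mkdir /tmp/mnt
# Mount the subdirectory directly
mount -t nfs -o vers=3,proto=tcp,async,nolock,noatime,nodiratime,wsize=1048576,rsize=1048576,timeo=600,actimeo=0 sfs-example.internal:/subdir /tmp/mnt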
How do I fix a Linux server exception caused by mistakenly deleting the mount point?
Issue Description: In a Linux operating system, an SFS is mounted through a mount point, and the mount point is then deleted on the scalable file console. This causes exceptions in the Linux system, such as commands freezing or not responding.
Solution:
1. In the Linux virtual server, press Ctrl+C to interrupt the currently running command.
2. Run the mount command to view the mount information and obtain the current mount path from the output, for example /mnt/test.
3. Run the umount -f /mnt/test command to force-unmount the file system, replacing /mnt/test with the mount path obtained in the previous step.
4. After the unmount is complete, you can recreate the mount point and remount the file system. Make sure the file system is fully unmounted before attempting to remount it. The steps above resolve Linux system exceptions caused by deleted mount points; a combined sketch follows.
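A minimal sketch of steps 2 and 3, assuming the stale mount path turns out to be /mnt/test:
# List NFS mounts to find the stale mount path
mount | grep nfs
# Force-unmount the stale path
umount -f /mnt/test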
Concurrent writes by multiple processes or clients to the same file can lead to data exceptions. How can this be avoided?
Issue Description: SFS allows multiple clients to share read/write access to files. However, when multiple processes or clients write to the same file concurrently (for example, concurrent writes to a log file), the NFS protocol itself does not support atomic append operations, which may result in exceptions such as overwritten, interleaved, or serialized writes.
Solution: Have each process or client write its data to a separate file, then merge the files during subsequent analysis and processing. This effectively avoids issues arising from concurrent writes and eliminates the need for file locks, minimizing the performance impact.
For scenarios that require concurrent appends to the same file (e.g., a shared log), the flock+seek mechanism can ensure the atomicity and consistency of writes. However, flock+seek operations are time-consuming and may significantly impact performance. A sketch follows.
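A minimal sketch of the flock+seek pattern using the flock(1) utility, with a hypothetical shared log at /mnt/sfs/app.log; opening the file with >> positions each write at the end of the file, and the exclusive lock serializes concurrent writers:
# Open the log on fd 9 in append mode, take an exclusive lock, then write under the lock
( flock -x 9 && echo "$(date -Is) worker-1 done" >&9 ) 9>>/mnt/sfs/app.log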
Why do two ECSs show different file owners when querying the same file in the SFS?
In the file system, a user's identity is determined not by the username but by the UID (user ID). When you query a file's owner name on an ECS instance, the file's UID is converted to the corresponding local username. If the same UID maps to different usernames on different ECS instances, the file appears to have different owners.
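A quick way to verify this, sketched with a hypothetical file /mnt/sfs/data.txt and UID 1001: compare the numeric owner on both ECS instances, then check which local account each instance maps that UID to:
# Show the numeric UID/GID of the file (identical on both instances)
ls -ln /mnt/sfs/data.txt
# Show which local username this instance maps the UID to
getent passwd 1001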
What should I do if mount.nfs: No such device is returned when mounting the SFS over NFS?
Issue Description: The message mount.nfs: No such device appears when mounting an NFS file system on an ECS instance.
Cause: The sunrpc or nfs kernel module is not loaded correctly.
Solution (sunrpc):
1. Run lsmod | grep sunrpc to check whether the sunrpc module is loaded successfully.
2. Check whether /etc/modprobe.d/sunrpc.conf is configured correctly.
3. Run modprobe sunrpc to reload sunrpc.
4. Re-mount the NFS file system (a combined sketch follows).
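A combined sketch of the sunrpc steps above; the mount address and local path in the final command are hypothetical placeholders:
# Check whether the sunrpc module is loaded
lsmod | grep sunrpc
# Inspect the module configuration
cat /etc/modprobe.d/sunrpc.conf
# Reload the module and retry the mount
modprobe sunrpc
mount -t nfs -o vers=3,nolock sfs-example.internal:/ /mnt/sfs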
Solution (nfs):
1. Run lsmod | grep nfs to determine whether the nfs module is loaded successfully.
2. If the output is empty, it means that nfs is not loaded successfully.
3. Reinstall the nfs-utils package and reload the nfs module.
4. Re-mount the NFS file system (a combined sketch follows).
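A combined sketch of the nfs steps above, assuming a yum-based distribution (adapt the package manager to your system); the mount address and local path are hypothetical placeholders:
# Check whether the nfs module is loaded; empty output means it is not
lsmod | grep nfs
# Reinstall nfs-utils and load the module
yum reinstall -y nfs-utils
modprobe nfs
# Retry the mount
mount -t nfs -o vers=3,nolock sfs-example.internal:/ /mnt/sfs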
Why is the mounted CIFS scalable file directory visible to the Administrator but not to other users?
On Windows, because of the system's user isolation mechanism, a directory mounted by one user does not appear in another user's session. To share the directory among multiple users, create a directory link pointing to the mount address; the shared directory can then be accessed from each user's session.
Run the following command to create a directory link called myshare under drive C and point it to the mount address, which can be obtained at the top of the File System Details page.
mklink /D C:\myshare <mount address>
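For example, assuming a hypothetical mount address \\sfs-example.internal\share:
mklink /D C:\myshare \\sfs-example.internal\share
After the link is created, other users can access the shared files through C:\myshare in their own sessions.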